Model Deployment: Classifying Brain Tumors from Magnetic Resonance Images by Leveraging Convolutional Neural Network-Based Multilevel Feature Extraction and Hierarchical Representation
- 1. Table of Contents
- 1.1 Data Background
- 1.2 Data Description
- 1.3 Data Quality Assessment
- 1.4 Data Preprocessing
- 1.5 Data Exploration
- 1.6 Predictive Model Development
- 1.6.1 Pre-Modelling Data Preparation
- 1.6.2 Data Splitting
- 1.6.3 Convolutional Neural Network Sequential Layer Development
- 1.6.4 CNN With No Regularization Model Fitting | Hyperparameter Tuning | Validation
- 1.6.5 CNN With Dropout Regularization Model Fitting | Hyperparameter Tuning | Validation
- 1.6.6 CNN With Batch Normalization Regularization Model Fitting | Hyperparameter Tuning | Validation
- 1.6.7 CNN With Dropout and Batch Normalization Regularization Model Fitting | Hyperparameter Tuning | Validation
- 1.6.8 Model Selection
- 1.6.9 Model Testing
- 1.6.10 Model Inference
- 1.7 Predictive Model Deployment Using Streamlit and Streamlit Community Cloud
- 2. Summary
- 3. References
1. Table of Contents
1.1 Data Background
1.2 Data Description
In [1]:
##################################
# Loading Python Libraries
##################################
##################################
# Data Loading, Data Preprocessing
# and Exploratory Data Analysis
##################################
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.cm as cm
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
%matplotlib inline
import tensorflow as tf
import keras
from PIL import Image
from glob import glob
import cv2
import os
import random
import math
##################################
# Model Development
##################################
from keras import backend as K
from keras import regularizers
from keras.models import Sequential, Model, load_model
from keras.layers import Input, Activation, Dense, Dropout, Flatten, Conv2D, MaxPooling2D, MaxPool2D, AveragePooling2D, GlobalMaxPooling2D, BatchNormalization
from keras.optimizers import Adam, SGD
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import img_to_array, array_to_img, load_img
from math import ceil
##################################
# Model Evaluation
##################################
from keras.metrics import PrecisionAtRecall, Recall
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support, accuracy_score
In [2]:
##################################
# Setting random seed options
# for the analysis
##################################
def set_seed(seed=123):
    np.random.seed(seed)
    tf.random.set_seed(seed)
    keras.utils.set_random_seed(seed)
    random.seed(seed)
    tf.config.experimental.enable_op_determinism()
    os.environ['TF_DETERMINISTIC_OPS'] = "1"
    os.environ['TF_CUDNN_DETERMINISM'] = "1"
    os.environ['PYTHONHASHSEED'] = str(seed)
set_seed()
In [3]:
##################################
# Filtering out unnecessary warnings
##################################
import warnings
warnings.filterwarnings('ignore')
In [4]:
##################################
# Defining file paths
##################################
DATASETS_ORIGINAL_PATH = r"datasets\Brain_Tumor_MRI_Dataset"
DATASETS_FINAL_TRAIN_PATH = r"datasets\Brain_Tumor_MRI_Dataset\Training"
DATASETS_FINAL_TEST_PATH = r"datasets\Brain_Tumor_MRI_Dataset\Testing"
MODELS_PATH = r"models"
PARAMETERS_PATH = r"parameters"
PIPELINES_PATH = r"pipelines"
In [5]:
##################################
# Defining the image category levels
##################################
diagnosis_code_dictionary = {'Tr-no': 0,
                             'Tr-noTr': 0,
                             'Tr-gl': 1,
                             'Tr-glTr': 1,
                             'Tr-me': 2,
                             'Tr-meTr': 2,
                             'Tr-pi': 3,
                             'Tr-piTr': 3}
##################################
# Defining the image category descriptions
##################################
diagnosis_description_dictionary = {'Tr-no': 'No Tumor',
                                    'Tr-noTr': 'No Tumor',
                                    'Tr-gl': 'Glioma',
                                    'Tr-glTr': 'Glioma',
                                    'Tr-me': 'Meningioma',
                                    'Tr-meTr': 'Meningioma',
                                    'Tr-pi': 'Pituitary',
                                    'Tr-piTr': 'Pituitary'}
##################################
# Consolidating the image path
##################################
imageid_path_dictionary = {os.path.splitext(os.path.basename(x))[0]: x for x in glob(os.path.join("..", DATASETS_FINAL_TRAIN_PATH, '*','*.jpg'))}
In [6]:
##################################
# Taking a snapshot of the dictionary
##################################
dict(list(imageid_path_dictionary.items())[0:5])
Out[6]:
{'Tr-glTr_0000': '..\\datasets\\Brain_Tumor_MRI_Dataset\\Training\\glioma\\Tr-glTr_0000.jpg',
'Tr-glTr_0001': '..\\datasets\\Brain_Tumor_MRI_Dataset\\Training\\glioma\\Tr-glTr_0001.jpg',
'Tr-glTr_0002': '..\\datasets\\Brain_Tumor_MRI_Dataset\\Training\\glioma\\Tr-glTr_0002.jpg',
'Tr-glTr_0003': '..\\datasets\\Brain_Tumor_MRI_Dataset\\Training\\glioma\\Tr-glTr_0003.jpg',
'Tr-glTr_0004': '..\\datasets\\Brain_Tumor_MRI_Dataset\\Training\\glioma\\Tr-glTr_0004.jpg'}
In [7]:
##################################
# Consolidating the information
# from the dataset
# into a dataframe
##################################
mri_images = pd.DataFrame.from_dict(imageid_path_dictionary, orient = 'index').reset_index()
mri_images.columns = ['Image_ID','Path']
classes = mri_images.Image_ID.str.split('_').str[0]
mri_images['Diagnosis'] = classes
mri_images['Target'] = mri_images['Diagnosis'].map(diagnosis_code_dictionary.get)
mri_images['Class'] = mri_images['Diagnosis'].map(diagnosis_description_dictionary.get)
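Each Image_ID encodes its class in the prefix before the underscore, which the cell above splits out and maps through the code and description dictionaries. A minimal sketch of that mapping on a few hypothetical IDs (the dictionary here is a trimmed stand-in for the full one defined earlier):

```python
import pandas as pd

# Trimmed stand-in for diagnosis_code_dictionary
diagnosis_code = {'Tr-no': 0, 'Tr-gl': 1, 'Tr-me': 2, 'Tr-pi': 3}

# Hypothetical image IDs following the dataset's naming convention
ids = pd.Series(['Tr-gl_0001', 'Tr-no_0042', 'Tr-pi_1440'])

# Split on the underscore and keep the class prefix
prefixes = ids.str.split('_').str[0]
targets = prefixes.map(diagnosis_code.get)

print(targets.tolist())  # [1, 0, 3]
```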
In [8]:
##################################
# Performing a general exploration of the dataset
##################################
print('Dataset Dimensions: ')
display(mri_images.shape)
Dataset Dimensions:
(5712, 5)
In [9]:
##################################
# Listing the column names and data types
##################################
print('Column Names and Data Types:')
display(mri_images.dtypes)
Column Names and Data Types:
Image_ID     object
Path         object
Diagnosis    object
Target        int64
Class        object
dtype: object
In [10]:
##################################
# Taking a snapshot of the dataset
##################################
mri_images.head()
Out[10]:
| | Image_ID | Path | Diagnosis | Target | Class |
|---|---|---|---|---|---|
| 0 | Tr-glTr_0000 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma |
| 1 | Tr-glTr_0001 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma |
| 2 | Tr-glTr_0002 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma |
| 3 | Tr-glTr_0003 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma |
| 4 | Tr-glTr_0004 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma |
In [11]:
##################################
# Performing a general exploration of the numeric variables
##################################
print('Numeric Variable Summary:')
display(mri_images.describe(include='number').transpose())
Numeric Variable Summary:
| | count | mean | std | min | 25% | 50% | 75% | max |
|---|---|---|---|---|---|---|---|---|
| Target | 5712.0 | 1.465336 | 1.147892 | 0.0 | 0.0 | 1.0 | 3.0 | 3.0 |
In [12]:
##################################
# Performing a general exploration of the object variable
##################################
print('Object Variable Summary:')
display(mri_images.describe(include='object').transpose())
Object Variable Summary:
| | count | unique | top | freq |
|---|---|---|---|---|
| Image_ID | 5712 | 5712 | Tr-pi_1440 | 1 |
| Path | 5712 | 5712 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\p... | 1 |
| Diagnosis | 5712 | 8 | Tr-no | 1585 |
| Class | 5712 | 4 | No Tumor | 1595 |
In [13]:
##################################
# Performing a general exploration of the target variable
##################################
mri_images.Class.value_counts()
Out[13]:
Class
No Tumor      1595
Pituitary     1457
Meningioma    1339
Glioma        1321
Name: count, dtype: int64
In [14]:
##################################
# Performing a general exploration of the target variable
##################################
mri_images.Class.value_counts(normalize=True)
Out[14]:
Class
No Tumor      0.279237
Pituitary     0.255077
Meningioma    0.234419
Glioma        0.231268
Name: proportion, dtype: float64
1.3 Data Quality Assessment
In [15]:
##################################
# Counting the number of duplicated images
##################################
mri_images.duplicated().sum()
Out[15]:
np.int64(0)
In [16]:
##################################
# Gathering the number of null images
##################################
mri_images.isnull().sum()
Out[16]:
Image_ID     0
Path         0
Diagnosis    0
Target       0
Class        0
dtype: int64
1.4 Data Preprocessing
In [17]:
##################################
# Including the pixel information
# of the actual images
# in array format
# into a dataframe
##################################
mri_images['Image'] = mri_images['Path'].map(lambda x: np.asarray(Image.open(x).resize((200,200))))
In [18]:
##################################
# Listing the column names and data types
##################################
print('Column Names and Data Types:')
display(mri_images.dtypes)
Column Names and Data Types:
Image_ID     object
Path         object
Diagnosis    object
Target        int64
Class        object
Image        object
dtype: object
In [19]:
##################################
# Taking a snapshot of the dataset
##################################
mri_images.head()
Out[19]:
| | Image_ID | Path | Diagnosis | Target | Class | Image |
|---|---|---|---|---|---|---|
| 0 | Tr-glTr_0000 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma | [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], ... |
| 1 | Tr-glTr_0001 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma | [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], ... |
| 2 | Tr-glTr_0002 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma | [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], ... |
| 3 | Tr-glTr_0003 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma | [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], ... |
| 4 | Tr-glTr_0004 | ..\datasets\Brain_Tumor_MRI_Dataset\Training\g... | Tr-glTr | 1 | Glioma | [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], ... |
In [20]:
##################################
# Taking a snapshot of the dataset
##################################
n_samples = 5
fig, m_axs = plt.subplots(4, n_samples, figsize = (2*n_samples, 10))
for n_axs, (type_name, type_rows) in zip(m_axs, mri_images.sort_values(['Class']).groupby('Class')):
    n_axs[2].set_title(type_name, fontsize = 14, weight = 'bold')
    for c_ax, (_, c_row) in zip(n_axs, type_rows.sample(n_samples, random_state=123).iterrows()):
        picture = c_row['Path']
        image = cv2.imread(picture)
        resized_image = cv2.resize(image, (500,500))
        c_ax.imshow(resized_image)
        c_ax.axis('off')
In [21]:
##################################
# Sampling a single image
##################################
samples, features = mri_images.shape
plt.figure()
pic_id = random.randrange(0, samples)
picture = mri_images['Path'][pic_id]
image = cv2.imread(picture)
<Figure size 640x480 with 0 Axes>
In [22]:
##################################
# Plotting using subplots
##################################
plt.figure(figsize=(15, 5))
##################################
# Formulating the original image
##################################
plt.subplot(1, 4, 1)
plt.imshow(image)
plt.title('Original Image', fontsize = 14, weight = 'bold')
plt.axis('off')
##################################
# Formulating the blue channel
##################################
plt.subplot(1, 4, 2)
plt.imshow(image[ : , : , 0])
plt.title('Blue Channel', fontsize = 14, weight = 'bold')
plt.axis('off')
##################################
# Formulating the green channel
##################################
plt.subplot(1, 4, 3)
plt.imshow(image[ : , : , 1])
plt.title('Green Channel', fontsize = 14, weight = 'bold')
plt.axis('off')
##################################
# Formulating the red channel
##################################
plt.subplot(1, 4, 4)
plt.imshow(image[ : , : , 2])
plt.title('Red Channel', fontsize = 14, weight = 'bold')
plt.axis('off')
##################################
# Consolidating all images
##################################
plt.show()
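Note that `cv2.imread` returns channels in BGR order, which is why channel index 0 above is the blue channel. For a true-color rendering with matplotlib (which expects RGB), the channel axis can simply be reversed; a minimal NumPy-only sketch:

```python
import numpy as np

# cv2.imread yields BGR channel order, while matplotlib's imshow expects RGB;
# reversing the last axis swaps the channels.
bgr = np.zeros((4, 4, 3), dtype=np.uint8)
bgr[..., 0] = 255  # a pure-blue pixel grid in BGR layout

rgb = bgr[..., ::-1]  # BGR -> RGB
print(rgb[0, 0])      # [  0   0 255]
```

The equivalent OpenCV call would be `cv2.cvtColor(image, cv2.COLOR_BGR2RGB)`; for these largely grayscale MRI scans the difference is cosmetic.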
In [23]:
##################################
# Determining the image shape
##################################
print('Image Shape:')
display(image.shape)
Image Shape:
(512, 512, 3)
In [24]:
##################################
# Determining the image height
##################################
print('Image Height:')
display(image.shape[0])
Image Height:
512
In [25]:
##################################
# Determining the image width
##################################
print('Image Width:')
display(image.shape[1])
Image Width:
512
In [26]:
##################################
# Determining the image dimension
##################################
print('Image Dimension:')
display(image.ndim)
Image Dimension:
3
In [27]:
##################################
# Determining the image size
##################################
print('Image Size:')
display(image.size)
Image Size:
786432
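The reported size is simply the total element count of the array, i.e. height × width × channels:

```python
# image.size counts every array element: height x width x channels
height, width, channels = 512, 512, 3
total_elements = height * width * channels
print(total_elements)  # 786432
```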
In [28]:
##################################
# Determining the image data type
##################################
print('Image Data Type:')
display(image.dtype)
Image Data Type:
dtype('uint8')
In [29]:
##################################
# Determining the maximum RGB value
##################################
print('Image Maximum RGB:')
display(image.max())
Image Maximum RGB:
np.uint8(255)
In [30]:
##################################
# Determining the minimum RGB value
##################################
print('Image Minimum RGB:')
display(image.min())
Image Minimum RGB:
np.uint8(0)
In [31]:
##################################
# Identifying the path for the images
# and defining image categories
##################################
path = (os.path.join("..", DATASETS_FINAL_TRAIN_PATH))
classes=["notumor", "glioma", "meningioma", "pituitary"]
num_classes = len(classes)
batch_size = 32
In [32]:
##################################
# Creating subsets of images
# for model training and
# setting the parameters for
# real-time data augmentation
# at each epoch
##################################
set_seed()
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=2,
                                   width_shift_range=0.02,
                                   height_shift_range=0.02,
                                   horizontal_flip=False,
                                   vertical_flip=False,
                                   shear_range=0.02,
                                   zoom_range=0.02,
                                   validation_split=0.2)
##################################
# Loading the model training images
##################################
train_gen = train_datagen.flow_from_directory(directory=path,
                                              target_size=(227, 227),
                                              class_mode='categorical',
                                              subset='training',
                                              shuffle=True,
                                              classes=classes,
                                              batch_size=batch_size,
                                              color_mode="grayscale")
Found 4571 images belonging to 4 classes.
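With `class_mode='categorical'`, the generator yields one-hot label vectors whose positions follow the `classes` list defined above (Keras also exposes this mapping as `train_gen.class_indices`). A minimal pure-Python sketch of that encoding, with no Keras dependency:

```python
import numpy as np

# Same ordering as the classes list passed to flow_from_directory
classes = ["notumor", "glioma", "meningioma", "pituitary"]
class_indices = {name: i for i, name in enumerate(classes)}

def one_hot(label, n=len(classes)):
    # Build the one-hot vector a categorical generator would emit
    vec = np.zeros(n, dtype=np.float32)
    vec[class_indices[label]] = 1.0
    return vec

print(class_indices)          # {'notumor': 0, 'glioma': 1, 'meningioma': 2, 'pituitary': 3}
print(one_hot("meningioma"))  # [0. 0. 1. 0.]
```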
In [33]:
##################################
# Loading samples of augmented images
# for the training set
##################################
fig, axes = plt.subplots(1, 5, figsize=(15, 3))
for i in range(5):
    batch = next(train_gen)
    images, labels = batch
    # Squeeze out the grayscale channel axis for display
    axes[i].imshow(images[0].squeeze(), cmap='gray')
    axes[i].set_title(f"Label: {labels[0]}")
    axes[i].axis('off')
plt.show()
In [34]:
##################################
# Creating subsets of images
# for model validation
# setting the parameters for
# real-time data augmentation
# at each epoch
##################################
set_seed()
val_datagen = ImageDataGenerator(rescale=1./255,
                                 validation_split=0.2)
##################################
# Loading the model evaluation images
##################################
val_gen = val_datagen.flow_from_directory(directory=path,
                                          target_size=(227, 227),
                                          class_mode='categorical',
                                          subset='validation',
                                          shuffle=False,
                                          classes=classes,
                                          batch_size=batch_size,
                                          color_mode="grayscale")
Found 1141 images belonging to 4 classes.
In [35]:
##################################
# Loading samples of original images
# for the validation set
##################################
images, labels = next(val_gen)
fig, axes = plt.subplots(1, 5, figsize=(15, 3))
for i, idx in enumerate(range(0, 5)):
    # Squeeze out the grayscale channel axis and index the matching label
    axes[i].imshow(images[idx].squeeze(), cmap='gray')
    axes[i].set_title(f"Label: {labels[idx]}")
    axes[i].axis('off')
plt.show()
1.5 Data Exploration
1.5.1 Exploratory Data Analysis
In [36]:
##################################
# Consolidating summary statistics
# for the image pixel values
##################################
mean_val = []
std_dev_val = []
max_val = []
min_val = []
for i in range(0, samples):
    mean_val.append(mri_images['Image'][i].mean())
    std_dev_val.append(np.std(mri_images['Image'][i]))
    max_val.append(mri_images['Image'][i].max())
    min_val.append(mri_images['Image'][i].min())
imageEDA = mri_images.loc[:,['Image', 'Class','Path']]
imageEDA['Mean'] = mean_val
imageEDA['StDev'] = std_dev_val
imageEDA['Max'] = max_val
imageEDA['Min'] = min_val
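The loop above gathers per-image statistics one image at a time; since every image was resized to the same shape, the same statistics can also be computed in one vectorized pass over a stacked array. A minimal sketch on synthetic images with known pixel values:

```python
import numpy as np

# Three synthetic 200x200x3 "images" with constant, known pixel values
images = [np.full((200, 200, 3), v, dtype=np.uint8) for v in (10, 20, 30)]

stack = np.stack(images)            # shape (3, 200, 200, 3)
means = stack.mean(axis=(1, 2, 3))  # one summary value per image
stds  = stack.std(axis=(1, 2, 3))
maxs  = stack.max(axis=(1, 2, 3))
mins  = stack.min(axis=(1, 2, 3))

print(means)  # [10. 20. 30.]
```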
In [37]:
##################################
# Consolidating the overall mean
# for the pixel intensity means
# grouped by categories
##################################
imageEDA.groupby(['Class'])['Mean'].mean()
Out[37]:
Class
Glioma        32.716871
Meningioma    43.487954
No Tumor      60.815724
Pituitary     49.273456
Name: Mean, dtype: float64
In [38]:
##################################
# Consolidating the overall minimum
# for the pixel intensity means
# grouped by categories
##################################
imageEDA.groupby(['Class'])['Mean'].min()
Out[38]:
Class
Glioma        13.701850
Meningioma    18.233400
No Tumor       9.770775
Pituitary     24.699575
Name: Mean, dtype: float64
In [39]:
##################################
# Consolidating the overall maximum
# for the pixel intensity means
# grouped by categories
##################################
imageEDA.groupby(['Class'])['Mean'].max()
Out[39]:
Class
Glioma         68.372425
Meningioma    137.765375
No Tumor      125.066725
Pituitary     102.007950
Name: Mean, dtype: float64
In [40]:
##################################
# Consolidating the overall standard deviation
# for the pixel intensity means
# grouped by categories
##################################
imageEDA.groupby(['Class'])['Mean'].std()
Out[40]:
Class
Glioma         8.565834
Meningioma    14.307165
No Tumor      21.338225
Pituitary      8.222902
Name: Mean, dtype: float64
In [41]:
##################################
# Formulating the mean distribution
# by category of the image pixel values
##################################
sns.displot(data = imageEDA, x = 'Mean', kind="kde", hue = 'Class', height=6, aspect=1.40)
plt.title('Image Pixel Intensity Mean Distribution by Category', fontsize=14, weight='bold');
In [42]:
##################################
# Formulating the maximum distribution
# by category of the image pixel values
##################################
sns.displot(data = imageEDA, x = 'Max', kind="kde", hue = 'Class', height=6, aspect=1.40)
plt.title('Image Pixel Intensity Maximum Distribution by Category', fontsize=14, weight='bold');
In [43]:
##################################
# Formulating the minimum distribution
# by category of the image pixel values
##################################
sns.displot(data = imageEDA, x = 'Min', kind="kde", hue = 'Class', height=6, aspect=1.40)
plt.title('Image Pixel Intensity Minimum Distribution by Category', fontsize=14, weight='bold');
In [44]:
##################################
# Formulating the standard deviation distribution
# by category of the image pixel values
##################################
sns.displot(data = imageEDA, x = 'StDev', kind="kde", hue = 'Class', height=6, aspect=1.40)
plt.title('Image Pixel Intensity Standard Deviation Distribution by Category', fontsize=14, weight='bold');
In [45]:
##################################
# Formulating the mean and standard deviation
# scatterplot distribution
# by category of the image pixel values
##################################
plt.figure(figsize=(10,6))
sns.set(style="ticks", font_scale = 1)
ax = sns.scatterplot(data=imageEDA, x="Mean", y="StDev", hue='Class', alpha=0.5)
sns.despine(top=True, right=True, left=False, bottom=False)
plt.xticks(rotation=0, fontsize = 12)
ax.set_xlabel('Image Pixel Intensity Mean',fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
plt.title('Image Pixel Intensity Mean and Standard Deviation Distribution', fontsize = 14, weight='bold');
In [46]:
##################################
# Formulating the mean and standard deviation
# scatterplot distribution
# by category of the image pixel values
##################################
scatterplot = sns.FacetGrid(imageEDA, col="Class", height=6)
scatterplot.map_dataframe(sns.scatterplot, x='Mean', y='StDev', alpha=0.5)
scatterplot.set_titles(col_template="{col_name}", row_template="{row_name}", size=18)
scatterplot.fig.subplots_adjust(top=.8)
scatterplot.fig.suptitle('Image Pixel Intensity Mean and Standard Deviation Distribution', fontsize=14, weight='bold')
axes = scatterplot.axes.flatten()
axes[0].set_ylabel('Image Pixel Intensity Standard Deviation')
for ax in axes:
    ax.set_xlabel('Image Pixel Intensity Mean')
scatterplot.fig.tight_layout()
In [47]:
##################################
# Formulating the mean and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
##################################
def getImage(path):
    image = cv2.imread(path)
    resized_image = cv2.resize(image, (300,300))
    return OffsetImage(resized_image, zoom = 0.1)
DF_sample = imageEDA.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Mean", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Mean', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(0,120)
plt.title('Overall: Image Pixel Intensity Mean and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path in zip(DF_sample['Mean'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [48]:
##################################
# Formulating the mean and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Glioma class
##################################
path_glioma = (os.path.join("..", DATASETS_FINAL_TRAIN_PATH,'glioma/'))
imageEDA_glioma = imageEDA.loc[imageEDA['Class'] == 'Glioma']
DF_sample = imageEDA_glioma.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Mean", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Mean', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(10,110)
plt.title('Glioma: Image Pixel Intensity Mean and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_glioma in zip(DF_sample['Mean'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_glioma), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [49]:
##################################
# Formulating the mean and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Meningioma class
##################################
path_meningioma = (os.path.join("..", DATASETS_FINAL_TRAIN_PATH,'meningioma/'))
imageEDA_meningioma = imageEDA.loc[imageEDA['Class'] == 'Meningioma']
DF_sample = imageEDA_meningioma.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Mean", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Mean', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(10,110)
plt.title('Meningioma: Image Pixel Intensity Mean and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_meningioma in zip(DF_sample['Mean'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_meningioma), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [50]:
##################################
# Formulating the mean and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Pituitary class
##################################
path_pituitary = (os.path.join("..", DATASETS_FINAL_TRAIN_PATH,'pituitary/'))
imageEDA_pituitary = imageEDA.loc[imageEDA['Class'] == 'Pituitary']
DF_sample = imageEDA_pituitary.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Mean", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Mean', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(0, 140)
ax.set_ylim(10,110)
plt.title('Pituitary: Image Pixel Intensity Mean and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_pituitary in zip(DF_sample['Mean'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_pituitary), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [51]:
##################################
# Formulating the mean and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the No Tumor class
##################################
path_no_tumor = (os.path.join("..", DATASETS_FINAL_TRAIN_PATH,'notumor/'))
imageEDA_no_tumor = imageEDA.loc[imageEDA['Class'] == 'No Tumor']
DF_sample = imageEDA_no_tumor.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Mean", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Mean', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(10,110)
plt.title('No Tumor: Image Pixel Intensity Mean and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_no_tumor in zip(DF_sample['Mean'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_no_tumor), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [52]:
#################################
# Formulating the minimum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
##################################
DF_sample = imageEDA.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Min", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Minimum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(0,120)
plt.title('Overall: Image Pixel Intensity Minimum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path in zip(DF_sample['Min'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [53]:
##################################
# Formulating the minimum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Glioma class
##################################
DF_sample = imageEDA_glioma.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Min", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Minimum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(10,110)
plt.title('Glioma: Image Pixel Intensity Minimum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_glioma in zip(DF_sample['Min'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_glioma), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [54]:
##################################
# Formulating the minimum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Meningioma class
##################################
DF_sample = imageEDA_meningioma.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Min", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Minimum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(10,110)
plt.title('Meningioma: Image Pixel Intensity Minimum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_meningioma in zip(DF_sample['Min'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_meningioma), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [55]:
##################################
# Formulating the minimum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Pituitary class
##################################
DF_sample = imageEDA_pituitary.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Min", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Minimum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(10,110)
plt.title('Pituitary: Image Pixel Intensity Minimum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_pituitary in zip(DF_sample['Min'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_pituitary), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [56]:
##################################
# Formulating the minimum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the No Tumor class
##################################
DF_sample = imageEDA_no_tumor.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Min", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Minimum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(-5,145)
ax.set_ylim(10,110)
plt.title('No Tumor: Image Pixel Intensity Minimum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_no_tumor in zip(DF_sample['Min'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_no_tumor), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [57]:
#################################
# Formulating the maximum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
##################################
DF_sample = imageEDA.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Max", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Maximum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(115,265)
ax.set_ylim(0,120)
plt.title('Overall: Image Pixel Intensity Maximum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path in zip(DF_sample['Max'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [58]:
##################################
# Formulating the maximum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Glioma class
##################################
DF_sample = imageEDA_glioma.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Max", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Maximum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(115,265)
ax.set_ylim(10,110)
plt.title('Glioma: Image Pixel Intensity Maximum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_glioma in zip(DF_sample['Max'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_glioma), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [59]:
##################################
# Formulating the maximum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Meningioma class
##################################
DF_sample = imageEDA_meningioma.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Max", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Maximum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(115,265)
ax.set_ylim(10,110)
plt.title('Meningioma: Image Pixel Intensity Maximum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_meningioma in zip(DF_sample['Max'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_meningioma), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [60]:
##################################
# Formulating the maximum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the Pituitary class
##################################
DF_sample = imageEDA_pituitary.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Max", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Maximum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(115,265)
ax.set_ylim(10,110)
plt.title('Pituitary: Image Pixel Intensity Maximum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_pituitary in zip(DF_sample['Max'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_pituitary), (x0, y0), frameon=False)
    ax.add_artist(ab)
In [61]:
##################################
# Formulating the maximum and standard deviation
# scatterplot distribution
# of the image pixel values
# represented as actual images
# for the No Tumor class
##################################
DF_sample = imageEDA_no_tumor.sample(frac=1.0, replace=False, random_state=123)
paths = DF_sample['Path']
fig, ax = plt.subplots(figsize=(15,9))
ab = sns.scatterplot(data=DF_sample, x="Max", y='StDev')
sns.despine(top=True, right=True, left=False, bottom=False)
ax.set_xlabel('Image Pixel Intensity Maximum', fontsize=14, weight='bold')
ax.set_ylabel('Image Pixel Intensity Standard Deviation', fontsize=14, weight='bold')
ax.set_xlim(115,265)
ax.set_ylim(10,110)
plt.title('No Tumor: Image Pixel Intensity Maximum and Standard Deviation Distribution', fontsize=14, weight='bold');
for x0, y0, path_no_tumor in zip(DF_sample['Max'], DF_sample['StDev'], paths):
    ab = AnnotationBbox(getImage(path_no_tumor), (x0, y0), frameon=False)
    ax.add_artist(ab)
1.5.2 Hypothesis Testing ¶
1.6 Predictive Model Development ¶
1.6.1 Pre-Modelling Data Preparation ¶
1.6.2 Data Splitting ¶
1.6.3 Convolutional Neural Network Sequential Layer Development ¶
In [62]:
##################################
# Defining a function for
# plotting the loss profile
# of the training and validation sets
##################################
def plot_training_history(history, model_name):
    plt.figure(figsize=(12, 8))

    # Plotting the training and validation loss
    plt.subplot(2, 1, 1)  # First subplot for loss
    plt.plot(history.history['loss'], label='Train Loss', color='blue')
    plt.plot(history.history['val_loss'], label='Validation Loss', color='orange')
    plt.title(f'{model_name} Training and Validation Loss', fontsize=16, weight='bold', pad=20)
    plt.ylim(-0.2, 2.2)
    plt.yticks([x * 0.50 for x in range(0, 5)])
    plt.xlim(-1, 21)
    plt.xticks([x for x in range(0, 21)])
    plt.xlabel('Epoch', fontsize=14, weight='bold')
    plt.ylabel('Loss', fontsize=14, weight='bold')
    plt.legend(loc='upper right')
    plt.grid(True)

    # Plotting the training and validation recall
    plt.subplot(2, 1, 2)  # Second subplot for recall
    plt.plot(history.history['recall'], label='Train Recall', color='green')
    plt.plot(history.history['val_recall'], label='Validation Recall', color='red')
    plt.title(f'{model_name} Training and Validation Recall', fontsize=16, weight='bold', pad=20)
    plt.ylim(-0.1, 1.1)
    plt.yticks([x * 0.25 for x in range(0, 5)])
    plt.xlim(-1, 21)
    plt.xticks([x for x in range(0, 21)])
    plt.xlabel('Epoch', fontsize=14, weight='bold')
    plt.ylabel('Recall', fontsize=14, weight='bold')
    plt.legend(loc='lower right')
    plt.grid(True)

    # Adjusting the layout and showing the plots
    plt.tight_layout(pad=2.0)
    plt.show()
In [63]:
##################################
# Defining the model file paths
##################################
NR_SIMPLE_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "nr_simple_best_model.keras")
DR_SIMPLE_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "dr_simple_best_model.keras")
BNR_SIMPLE_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "bnr_simple_best_model.keras")
CDRBNR_SIMPLE_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "cdrbnr_simple_best_model.keras")
NR_COMPLEX_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "nr_complex_best_model.keras")
DR_COMPLEX_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "dr_complex_best_model.keras")
BNR_COMPLEX_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "bnr_complex_best_model.keras")
CDRBNR_COMPLEX_BEST_MODEL_PATH = os.path.join("..", MODELS_PATH, "cdrbnr_complex_best_model.keras")
In [64]:
##################################
# Defining the model callback configuration
# for model training
##################################
early_stopping = EarlyStopping(
    monitor='val_loss',        # Defining the metric to monitor
    patience=10,               # Defining the number of epochs to wait before stopping if no improvement
    min_delta=1e-4,            # Defining the minimum change in the monitored quantity to qualify as an improvement
    restore_best_weights=True  # Restoring the weights from the best epoch
)
reduce_lr = ReduceLROnPlateau(
    monitor='val_loss',  # Defining the metric to monitor
    factor=0.1,          # Multiplying the learning rate by 0.1 (reducing it to 10% of its current value)
    patience=3,          # Defining the number of epochs to wait before reducing the learning rate
    min_lr=1e-6          # Defining the lower bound on the learning rate
)
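The `ReduceLROnPlateau` settings above can be sanity-checked with plain arithmetic: each time the monitored validation loss plateaus, the learning rate is multiplied by `factor=0.1`, but it is never allowed to fall below `min_lr=1e-6`. A minimal sketch of that schedule, assuming a starting rate of `1e-3` (an illustrative value, not one taken from this notebook):

```python
# Simulating how ReduceLROnPlateau shrinks the learning rate:
# on every simulated plateau, lr is multiplied by `factor` (0.1 here)
# but clamped from below by `min_lr` (1e-6).
factor, min_lr = 0.1, 1e-6
lr = 1e-3  # assumed starting rate for illustration
schedule = [lr]
for _ in range(5):  # five simulated plateaus
    lr = max(lr * factor, min_lr)
    schedule.append(lr)
print(schedule)  # roughly 1e-3, 1e-4, 1e-5, then clamped at 1e-6
```

After three plateaus the rate reaches the floor, so further plateaus leave it unchanged; only `EarlyStopping` would then end training.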
nr_simple_model_checkpoint = ModelCheckpoint(
    filepath=NR_SIMPLE_BEST_MODEL_PATH,  # Defining the file path for saving
    monitor='val_loss',                  # Defining the metric to monitor
    save_best_only=True,                 # Saving only the best model
    save_weights_only=False,             # Saving the entire model, not just weights
)
dr_simple_model_checkpoint = ModelCheckpoint(
    filepath=DR_SIMPLE_BEST_MODEL_PATH,  # Defining the file path for saving
    monitor='val_loss',                  # Defining the metric to monitor
    save_best_only=True,                 # Saving only the best model
    save_weights_only=False,             # Saving the entire model, not just weights
)
bnr_simple_model_checkpoint = ModelCheckpoint(
    filepath=BNR_SIMPLE_BEST_MODEL_PATH, # Defining the file path for saving
    monitor='val_loss',                  # Defining the metric to monitor
    save_best_only=True,                 # Saving only the best model
    save_weights_only=False,             # Saving the entire model, not just weights
)
cdrbnr_simple_model_checkpoint = ModelCheckpoint(
    filepath=CDRBNR_SIMPLE_BEST_MODEL_PATH, # Defining the file path for saving
    monitor='val_loss',                     # Defining the metric to monitor
    save_best_only=True,                    # Saving only the best model
    save_weights_only=False,                # Saving the entire model, not just weights
)
nr_complex_model_checkpoint = ModelCheckpoint(
    filepath=NR_COMPLEX_BEST_MODEL_PATH, # Defining the file path for saving
    monitor='val_loss',                  # Defining the metric to monitor
    save_best_only=True,                 # Saving only the best model
    save_weights_only=False,             # Saving the entire model, not just weights
)
dr_complex_model_checkpoint = ModelCheckpoint(
    filepath=DR_COMPLEX_BEST_MODEL_PATH, # Defining the file path for saving
    monitor='val_loss',                  # Defining the metric to monitor
    save_best_only=True,                 # Saving only the best model
    save_weights_only=False,             # Saving the entire model, not just weights
)
bnr_complex_model_checkpoint = ModelCheckpoint(
    filepath=BNR_COMPLEX_BEST_MODEL_PATH, # Defining the file path for saving
    monitor='val_loss',                   # Defining the metric to monitor
    save_best_only=True,                  # Saving only the best model
    save_weights_only=False,              # Saving the entire model, not just weights
)
cdrbnr_complex_model_checkpoint = ModelCheckpoint(
    filepath=CDRBNR_COMPLEX_BEST_MODEL_PATH, # Defining the file path for saving
    monitor='val_loss',                      # Defining the metric to monitor
    save_best_only=True,                     # Saving only the best model
    save_weights_only=False,                 # Saving the entire model, not just weights
)
1.6.3.1 CNN With No Regularization ¶
In [65]:
##################################
# Formulating the network architecture
# for a simple CNN with no regularization
##################################
set_seed()
batch_size = 32
model_nr_simple = Sequential(name="model_nr_simple")
model_nr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(227, 227, 1), name="nr_simple_conv2d_0"))
model_nr_simple.add(MaxPooling2D(pool_size=(2, 2), name="nr_simple_max_pooling2d_0"))
model_nr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', name="nr_simple_conv2d_1"))
model_nr_simple.add(MaxPooling2D(pool_size=(2, 2), name="nr_simple_max_pooling2d_1"))
model_nr_simple.add(Flatten(name="nr_simple_flatten"))
model_nr_simple.add(Dense(units=32, activation='relu', name="nr_simple_dense_0"))
model_nr_simple.add(Dense(units=num_classes, activation='softmax', name="nr_simple_dense_1"))
In [66]:
##################################
# Displaying the model summary
# for a simple CNN with no regularization
##################################
print(model_nr_simple.summary())
Model: "model_nr_simple"
Layer (type)                                 Output Shape              Param #
==============================================================================
nr_simple_conv2d_0 (Conv2D)                  (None, 227, 227, 8)            80
nr_simple_max_pooling2d_0 (MaxPooling2D)     (None, 113, 113, 8)             0
nr_simple_conv2d_1 (Conv2D)                  (None, 113, 113, 16)        1,168
nr_simple_max_pooling2d_1 (MaxPooling2D)     (None, 56, 56, 16)              0
nr_simple_flatten (Flatten)                  (None, 50176)                   0
nr_simple_dense_0 (Dense)                    (None, 32)              1,605,664
nr_simple_dense_1 (Dense)                    (None, 4)                     132
==============================================================================
Total params: 1,607,044 (6.13 MB)
Trainable params: 1,607,044 (6.13 MB)
Non-trainable params: 0 (0.00 B)
None
In [67]:
##################################
# Displaying the model layers
# for a simple CNN with no regularization
##################################
model_nr_simple_layer_names = [layer.name for layer in model_nr_simple.layers]
print("Layer Names:", model_nr_simple_layer_names)
Layer Names: ['nr_simple_conv2d_0', 'nr_simple_max_pooling2d_0', 'nr_simple_conv2d_1', 'nr_simple_max_pooling2d_1', 'nr_simple_flatten', 'nr_simple_dense_0', 'nr_simple_dense_1']
In [68]:
##################################
# Displaying the number of weights
# for each model layer
# for a simple CNN with no regularization
##################################
for layer in model_nr_simple.layers:
    if hasattr(layer, 'weights'):
        print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: nr_simple_conv2d_0, Number of Weights: 2
Layer: nr_simple_max_pooling2d_0, Number of Weights: 0
Layer: nr_simple_conv2d_1, Number of Weights: 2
Layer: nr_simple_max_pooling2d_1, Number of Weights: 0
Layer: nr_simple_flatten, Number of Weights: 0
Layer: nr_simple_dense_0, Number of Weights: 2
Layer: nr_simple_dense_1, Number of Weights: 2
In [69]:
##################################
# Displaying the number of parameters
# for each model layer
# for a simple CNN with no regularization
##################################
total_parameters = 0
for layer in model_nr_simple.layers:
    layer_parameters = layer.count_params()
    total_parameters += layer_parameters
    print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: nr_simple_conv2d_0, Parameters: 80
Layer: nr_simple_max_pooling2d_0, Parameters: 0
Layer: nr_simple_conv2d_1, Parameters: 1168
Layer: nr_simple_max_pooling2d_1, Parameters: 0
Layer: nr_simple_flatten, Parameters: 0
Layer: nr_simple_dense_0, Parameters: 1605664
Layer: nr_simple_dense_1, Parameters: 132

Total Parameters in the Model: 1607044
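The parameter counts reported above follow directly from the layer shapes: a Conv2D layer carries (kernel_height × kernel_width × input_channels + 1) × filters weights, and a Dense layer (inputs + 1) × units, where the +1 accounts for the bias term. A short sketch reproducing the totals for model_nr_simple with plain arithmetic (no Keras required):

```python
# Parameter-count arithmetic for model_nr_simple (227x227x1 input).
def conv2d_params(kh, kw, in_ch, filters):
    return (kh * kw * in_ch + 1) * filters  # +1 for the bias per filter

def dense_params(inputs, units):
    return (inputs + 1) * units  # +1 for the bias per unit

conv0 = conv2d_params(3, 3, 1, 8)    # 80
conv1 = conv2d_params(3, 3, 8, 16)   # 1,168
# Two (2, 2) max-pooling steps shrink the map: 227 -> 113 -> 56 (floor)
flat = 56 * 56 * 16                  # 50,176 flattened features
dense0 = dense_params(flat, 32)      # 1,605,664
dense1 = dense_params(32, 4)         # 132
total = conv0 + conv1 + dense0 + dense1
print(total)  # 1607044, matching the model summary
```

Note that nearly all of the capacity sits in the first dense layer after flattening, which is why the dense width (32 vs. 128) dominates the difference between the simple and complex variants.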
In [70]:
##################################
# Formulating the network architecture
# for a complex CNN with no regularization
##################################
set_seed()
batch_size = 32
model_nr_complex = Sequential(name="model_nr_complex")
model_nr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(227, 227, 1), name="nr_complex_conv2d_0"))
model_nr_complex.add(MaxPooling2D(pool_size=(2, 2), name="nr_complex_max_pooling2d_0"))
model_nr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', name="nr_complex_conv2d_1"))
model_nr_complex.add(MaxPooling2D(pool_size=(2, 2), name="nr_complex_max_pooling2d_1"))
model_nr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu', name="nr_complex_conv2d_2"))
model_nr_complex.add(MaxPooling2D(pool_size=(2, 2), name="nr_complex_max_pooling2d_2"))
model_nr_complex.add(Flatten(name="nr_complex_flatten"))
model_nr_complex.add(Dense(units=128, activation='relu', name="nr_complex_dense_0"))
model_nr_complex.add(Dense(units=num_classes, activation='softmax', name="nr_complex_dense_1"))
In [71]:
##################################
# Displaying the model summary
# for a complex CNN with no regularization
##################################
print(model_nr_complex.summary())
Model: "model_nr_complex"
Layer (type)                                 Output Shape              Param #
==============================================================================
nr_complex_conv2d_0 (Conv2D)                 (None, 227, 227, 16)          160
nr_complex_max_pooling2d_0 (MaxPooling2D)    (None, 113, 113, 16)            0
nr_complex_conv2d_1 (Conv2D)                 (None, 113, 113, 32)        4,640
nr_complex_max_pooling2d_1 (MaxPooling2D)    (None, 56, 56, 32)              0
nr_complex_conv2d_2 (Conv2D)                 (None, 56, 56, 64)         18,496
nr_complex_max_pooling2d_2 (MaxPooling2D)    (None, 28, 28, 64)              0
nr_complex_flatten (Flatten)                 (None, 50176)                   0
nr_complex_dense_0 (Dense)                   (None, 128)             6,422,656
nr_complex_dense_1 (Dense)                   (None, 4)                     516
==============================================================================
Total params: 6,446,468 (24.59 MB)
Trainable params: 6,446,468 (24.59 MB)
Non-trainable params: 0 (0.00 B)
None
In [72]:
##################################
# Displaying the model layers
# for a complex CNN with no regularization
##################################
model_nr_complex_layer_names = [layer.name for layer in model_nr_complex.layers]
print("Layer Names:", model_nr_complex_layer_names)
Layer Names: ['nr_complex_conv2d_0', 'nr_complex_max_pooling2d_0', 'nr_complex_conv2d_1', 'nr_complex_max_pooling2d_1', 'nr_complex_conv2d_2', 'nr_complex_max_pooling2d_2', 'nr_complex_flatten', 'nr_complex_dense_0', 'nr_complex_dense_1']
In [73]:
##################################
# Displaying the number of weights
# for each model layer
# for a complex CNN with no regularization
##################################
for layer in model_nr_complex.layers:
    if hasattr(layer, 'weights'):
        print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: nr_complex_conv2d_0, Number of Weights: 2
Layer: nr_complex_max_pooling2d_0, Number of Weights: 0
Layer: nr_complex_conv2d_1, Number of Weights: 2
Layer: nr_complex_max_pooling2d_1, Number of Weights: 0
Layer: nr_complex_conv2d_2, Number of Weights: 2
Layer: nr_complex_max_pooling2d_2, Number of Weights: 0
Layer: nr_complex_flatten, Number of Weights: 0
Layer: nr_complex_dense_0, Number of Weights: 2
Layer: nr_complex_dense_1, Number of Weights: 2
In [74]:
##################################
# Displaying the number of parameters
# for each model layer
# for a complex CNN with no regularization
##################################
total_parameters = 0
for layer in model_nr_complex.layers:
    layer_parameters = layer.count_params()
    total_parameters += layer_parameters
    print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: nr_complex_conv2d_0, Parameters: 160
Layer: nr_complex_max_pooling2d_0, Parameters: 0
Layer: nr_complex_conv2d_1, Parameters: 4640
Layer: nr_complex_max_pooling2d_1, Parameters: 0
Layer: nr_complex_conv2d_2, Parameters: 18496
Layer: nr_complex_max_pooling2d_2, Parameters: 0
Layer: nr_complex_flatten, Parameters: 0
Layer: nr_complex_dense_0, Parameters: 6422656
Layer: nr_complex_dense_1, Parameters: 516

Total Parameters in the Model: 6446468
1.6.3.2 CNN With Dropout Regularization ¶
In [75]:
##################################
# Formulating the network architecture
# for a simple CNN with dropout regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_dr_simple = Sequential(name="model_dr_simple")
model_dr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(227, 227, 1), name="dr_simple_conv2d_0"))
model_dr_simple.add(MaxPooling2D(pool_size=(2, 2), name="dr_simple_max_pooling2d_0"))
model_dr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', name="dr_simple_conv2d_1"))
model_dr_simple.add(MaxPooling2D(pool_size=(2, 2), name="dr_simple_max_pooling2d_1"))
model_dr_simple.add(Flatten(name="dr_simple_flatten"))
model_dr_simple.add(Dense(units=32, activation='relu', name="dr_simple_dense_0"))
model_dr_simple.add(Dropout(rate=0.30, name="dr_simple_dropout"))
model_dr_simple.add(Dense(units=num_classes, activation='softmax', name="dr_simple_dense_1"))
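The Dropout layer above zeroes a random 30% of the dense activations at each training step; Keras implements this as "inverted dropout", scaling the surviving activations by 1/(1 - rate) so their expected value is unchanged at inference time. A NumPy sketch of the mechanism (illustrative only, not the Keras implementation itself):

```python
import numpy as np

rng = np.random.default_rng(123)
rate = 0.30
x = np.ones(100_000)  # stand-in activations, all equal to 1.0

# Inverted dropout: drop ~30% of units, scale survivors by 1/(1 - rate)
mask = rng.random(x.shape) >= rate
dropped = x * mask / (1.0 - rate)

# The mean activation stays close to the original 1.0, so no rescaling
# is needed when dropout is disabled at inference time
print(round(dropped.mean(), 2))
```

Because the expectation is preserved during training, the layer can simply be bypassed at prediction time without any compensation factor.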
In [76]:
##################################
# Displaying the model summary
# for a simple CNN with dropout regularization
##################################
print(model_dr_simple.summary())
Model: "model_dr_simple"
Layer (type)                                 Output Shape              Param #
==============================================================================
dr_simple_conv2d_0 (Conv2D)                  (None, 227, 227, 8)            80
dr_simple_max_pooling2d_0 (MaxPooling2D)     (None, 113, 113, 8)             0
dr_simple_conv2d_1 (Conv2D)                  (None, 113, 113, 16)        1,168
dr_simple_max_pooling2d_1 (MaxPooling2D)     (None, 56, 56, 16)              0
dr_simple_flatten (Flatten)                  (None, 50176)                   0
dr_simple_dense_0 (Dense)                    (None, 32)              1,605,664
dr_simple_dropout (Dropout)                  (None, 32)                      0
dr_simple_dense_1 (Dense)                    (None, 4)                     132
==============================================================================
Total params: 1,607,044 (6.13 MB)
Trainable params: 1,607,044 (6.13 MB)
Non-trainable params: 0 (0.00 B)
None
In [77]:
##################################
# Displaying the model layers
# for a simple CNN with dropout regularization
##################################
model_dr_simple_layer_names = [layer.name for layer in model_dr_simple.layers]
print("Layer Names:", model_dr_simple_layer_names)
Layer Names: ['dr_simple_conv2d_0', 'dr_simple_max_pooling2d_0', 'dr_simple_conv2d_1', 'dr_simple_max_pooling2d_1', 'dr_simple_flatten', 'dr_simple_dense_0', 'dr_simple_dropout', 'dr_simple_dense_1']
In [78]:
##################################
# Displaying the number of weights
# for each model layer
# for a simple CNN with dropout regularization
##################################
for layer in model_dr_simple.layers:
    if hasattr(layer, 'weights'):
        print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: dr_simple_conv2d_0, Number of Weights: 2
Layer: dr_simple_max_pooling2d_0, Number of Weights: 0
Layer: dr_simple_conv2d_1, Number of Weights: 2
Layer: dr_simple_max_pooling2d_1, Number of Weights: 0
Layer: dr_simple_flatten, Number of Weights: 0
Layer: dr_simple_dense_0, Number of Weights: 2
Layer: dr_simple_dropout, Number of Weights: 0
Layer: dr_simple_dense_1, Number of Weights: 2
In [79]:
##################################
# Displaying the number of parameters
# for each model layer
# for a simple CNN with dropout regularization
##################################
total_parameters = 0
for layer in model_dr_simple.layers:
    layer_parameters = layer.count_params()
    total_parameters += layer_parameters
    print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: dr_simple_conv2d_0, Parameters: 80
Layer: dr_simple_max_pooling2d_0, Parameters: 0
Layer: dr_simple_conv2d_1, Parameters: 1168
Layer: dr_simple_max_pooling2d_1, Parameters: 0
Layer: dr_simple_flatten, Parameters: 0
Layer: dr_simple_dense_0, Parameters: 1605664
Layer: dr_simple_dropout, Parameters: 0
Layer: dr_simple_dense_1, Parameters: 132

Total Parameters in the Model: 1607044
In [80]:
##################################
# Formulating the network architecture
# for a complex CNN with dropout regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_dr_complex = Sequential(name="model_dr_complex")
model_dr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(227, 227, 1), name="dr_complex_conv2d_0"))
model_dr_complex.add(MaxPooling2D(pool_size=(2, 2), name="dr_complex_max_pooling2d_0"))
model_dr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', name="dr_complex_conv2d_1"))
model_dr_complex.add(MaxPooling2D(pool_size=(2, 2), name="dr_complex_max_pooling2d_1"))
model_dr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu', name="dr_complex_conv2d_2"))
model_dr_complex.add(MaxPooling2D(pool_size=(2, 2), name="dr_complex_max_pooling2d_2"))
model_dr_complex.add(Flatten(name="dr_complex_flatten"))
model_dr_complex.add(Dense(units=128, activation='relu', name="dr_complex_dense_0"))
model_dr_complex.add(Dropout(rate=0.30, name="dr_complex_dropout"))
model_dr_complex.add(Dense(units=num_classes, activation='softmax', name="dr_complex_dense_1"))
In [81]:
##################################
# Displaying the model summary
# for a complex CNN with dropout regularization
##################################
print(model_dr_complex.summary())
Model: "model_dr_complex"
Layer (type)                                 Output Shape              Param #
==============================================================================
dr_complex_conv2d_0 (Conv2D)                 (None, 227, 227, 16)          160
dr_complex_max_pooling2d_0 (MaxPooling2D)    (None, 113, 113, 16)            0
dr_complex_conv2d_1 (Conv2D)                 (None, 113, 113, 32)        4,640
dr_complex_max_pooling2d_1 (MaxPooling2D)    (None, 56, 56, 32)              0
dr_complex_conv2d_2 (Conv2D)                 (None, 56, 56, 64)         18,496
dr_complex_max_pooling2d_2 (MaxPooling2D)    (None, 28, 28, 64)              0
dr_complex_flatten (Flatten)                 (None, 50176)                   0
dr_complex_dense_0 (Dense)                   (None, 128)             6,422,656
dr_complex_dropout (Dropout)                 (None, 128)                     0
dr_complex_dense_1 (Dense)                   (None, 4)                     516
==============================================================================
Total params: 6,446,468 (24.59 MB)
Trainable params: 6,446,468 (24.59 MB)
Non-trainable params: 0 (0.00 B)
None
In [82]:
##################################
# Displaying the model layers
# for a complex CNN with dropout regularization
##################################
model_dr_complex_layer_names = [layer.name for layer in model_dr_complex.layers]
print("Layer Names:", model_dr_complex_layer_names)
Layer Names: ['dr_complex_conv2d_0', 'dr_complex_max_pooling2d_0', 'dr_complex_conv2d_1', 'dr_complex_max_pooling2d_1', 'dr_complex_conv2d_2', 'dr_complex_max_pooling2d_2', 'dr_complex_flatten', 'dr_complex_dense_0', 'dr_complex_dropout', 'dr_complex_dense_1']
In [83]:
##################################
# Displaying the number of weights
# for each model layer
# for a complex CNN with dropout regularization
##################################
for layer in model_dr_complex.layers:
    if hasattr(layer, 'weights'):
        print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: dr_complex_conv2d_0, Number of Weights: 2
Layer: dr_complex_max_pooling2d_0, Number of Weights: 0
Layer: dr_complex_conv2d_1, Number of Weights: 2
Layer: dr_complex_max_pooling2d_1, Number of Weights: 0
Layer: dr_complex_conv2d_2, Number of Weights: 2
Layer: dr_complex_max_pooling2d_2, Number of Weights: 0
Layer: dr_complex_flatten, Number of Weights: 0
Layer: dr_complex_dense_0, Number of Weights: 2
Layer: dr_complex_dropout, Number of Weights: 0
Layer: dr_complex_dense_1, Number of Weights: 2
In [84]:
##################################
# Displaying the number of parameters
# for each model layer
# for a complex CNN with dropout regularization
##################################
total_parameters = 0
for layer in model_dr_complex.layers:
    layer_parameters = layer.count_params()
    total_parameters += layer_parameters
    print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: dr_complex_conv2d_0, Parameters: 160
Layer: dr_complex_max_pooling2d_0, Parameters: 0
Layer: dr_complex_conv2d_1, Parameters: 4640
Layer: dr_complex_max_pooling2d_1, Parameters: 0
Layer: dr_complex_conv2d_2, Parameters: 18496
Layer: dr_complex_max_pooling2d_2, Parameters: 0
Layer: dr_complex_flatten, Parameters: 0
Layer: dr_complex_dense_0, Parameters: 6422656
Layer: dr_complex_dropout, Parameters: 0
Layer: dr_complex_dense_1, Parameters: 516

Total Parameters in the Model: 6446468
1.6.3.3 CNN With Batch Normalization Regularization ¶
In [85]:
##################################
# Formulating the network architecture
# for a simple CNN with batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_bnr_simple = Sequential(name="model_bnr_simple")
model_bnr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(227, 227, 1), name="bnr_simple_conv2d_0"))
model_bnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="bnr_simple_max_pooling2d_0"))
model_bnr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', name="bnr_simple_conv2d_1"))
model_bnr_simple.add(BatchNormalization(name="bnr_simple_batch_normalization"))
model_bnr_simple.add(Activation('relu', name="bnr_simple_activation"))
model_bnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="bnr_simple_max_pooling2d_1"))
model_bnr_simple.add(Flatten(name="bnr_simple_flatten"))
model_bnr_simple.add(Dense(units=32, activation='relu', name="bnr_simple_dense_0"))
model_bnr_simple.add(Dense(units=num_classes, activation='softmax', name="bnr_simple_dense_1"))
In [86]:
##################################
# Displaying the model summary
# for a simple CNN with batch normalization regularization
##################################
print(model_bnr_simple.summary())
Model: "model_bnr_simple"
Layer (type)                                 Output Shape              Param #
==============================================================================
bnr_simple_conv2d_0 (Conv2D)                 (None, 227, 227, 8)            80
bnr_simple_max_pooling2d_0 (MaxPooling2D)    (None, 113, 113, 8)             0
bnr_simple_conv2d_1 (Conv2D)                 (None, 113, 113, 16)        1,168
bnr_simple_batch_normalization               (None, 113, 113, 16)           64
(BatchNormalization)
bnr_simple_activation (Activation)           (None, 113, 113, 16)            0
bnr_simple_max_pooling2d_1 (MaxPooling2D)    (None, 56, 56, 16)              0
bnr_simple_flatten (Flatten)                 (None, 50176)                   0
bnr_simple_dense_0 (Dense)                   (None, 32)              1,605,664
bnr_simple_dense_1 (Dense)                   (None, 4)                     132
==============================================================================
Total params: 1,607,108 (6.13 MB)
Trainable params: 1,607,076 (6.13 MB)
Non-trainable params: 32 (128.00 B)
None
In [87]:
##################################
# Displaying the model layers
# for a simple CNN with batch normalization regularization
##################################
model_bnr_simple_layer_names = [layer.name for layer in model_bnr_simple.layers]
print("Layer Names:", model_bnr_simple_layer_names)
Layer Names: ['bnr_simple_conv2d_0', 'bnr_simple_max_pooling2d_0', 'bnr_simple_conv2d_1', 'bnr_simple_batch_normalization', 'bnr_simple_activation', 'bnr_simple_max_pooling2d_1', 'bnr_simple_flatten', 'bnr_simple_dense_0', 'bnr_simple_dense_1']
In [88]:
##################################
# Displaying the number of weights
# for each model layer
# for a simple CNN with batch normalization regularization
##################################
for layer in model_bnr_simple.layers:
if hasattr(layer, 'weights'):
print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: bnr_simple_conv2d_0, Number of Weights: 2
Layer: bnr_simple_max_pooling2d_0, Number of Weights: 0
Layer: bnr_simple_conv2d_1, Number of Weights: 2
Layer: bnr_simple_batch_normalization, Number of Weights: 4
Layer: bnr_simple_activation, Number of Weights: 0
Layer: bnr_simple_max_pooling2d_1, Number of Weights: 0
Layer: bnr_simple_flatten, Number of Weights: 0
Layer: bnr_simple_dense_0, Number of Weights: 2
Layer: bnr_simple_dense_1, Number of Weights: 2
In [89]:
##################################
# Displaying the number of parameters
# for each model layer
# for a simple CNN with batch normalization regularization
##################################
total_parameters = 0
for layer in model_bnr_simple.layers:
layer_parameters = layer.count_params()
total_parameters += layer_parameters
print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: bnr_simple_conv2d_0, Parameters: 80
Layer: bnr_simple_max_pooling2d_0, Parameters: 0
Layer: bnr_simple_conv2d_1, Parameters: 1168
Layer: bnr_simple_batch_normalization, Parameters: 64
Layer: bnr_simple_activation, Parameters: 0
Layer: bnr_simple_max_pooling2d_1, Parameters: 0
Layer: bnr_simple_flatten, Parameters: 0
Layer: bnr_simple_dense_0, Parameters: 1605664
Layer: bnr_simple_dense_1, Parameters: 132

Total Parameters in the Model: 1607108
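As a sanity check, the per-layer counts reported by Keras can be re-derived by hand from the layer shapes. The helper functions below are illustrative re-derivations, not framework code: a Conv2D layer holds `kernel_h * kernel_w * in_channels * filters` weights plus one bias per filter, a Dense layer holds `in_units * out_units` weights plus one bias per unit, and a BatchNormalization layer holds four parameters per channel (trainable gamma and beta, plus the non-trainable moving mean and variance, which is why 2 × 16 = 32 parameters are non-trainable here).

```python
# Hand-derived parameter counts for model_bnr_simple
# (illustrative helpers, not Keras internals).

def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    # one (kernel_h x kernel_w x in_channels) kernel plus one bias per filter
    return kernel_h * kernel_w * in_channels * filters + filters

def dense_params(in_units, out_units):
    # full weight matrix plus one bias per output unit
    return in_units * out_units + out_units

def batchnorm_params(channels):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * channels

conv0  = conv2d_params(3, 3, 1, 8)         # 80
conv1  = conv2d_params(3, 3, 8, 16)        # 1168
bn     = batchnorm_params(16)              # 64
dense0 = dense_params(56 * 56 * 16, 32)    # 1605664 (flattened 56x56x16 map)
dense1 = dense_params(32, 4)               # 132

total = conv0 + conv1 + bn + dense0 + dense1
print(total)  # 1607108, matching model_bnr_simple.count_params()
```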
In [90]:
##################################
# Formulating the network architecture
# for a complex CNN with batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_bnr_complex = Sequential(name="model_bnr_complex")
model_bnr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape, name="bnr_complex_conv2d_0"))
model_bnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="bnr_complex_max_pooling2d_0"))
model_bnr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', name="bnr_complex_conv2d_1"))
model_bnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="bnr_complex_max_pooling2d_1"))
model_bnr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu', name="bnr_complex_conv2d_2"))
model_bnr_complex.add(BatchNormalization(name="bnr_complex_batch_normalization"))
model_bnr_complex.add(Activation('relu', name="bnr_complex_activation"))
model_bnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="bnr_complex_max_pooling2d_2"))
model_bnr_complex.add(Flatten(name="bnr_complex_flatten"))
model_bnr_complex.add(Dense(units=128, activation='relu', name="bnr_complex_dense_0"))
model_bnr_complex.add(Dense(units=num_classes, activation='softmax', name="bnr_complex_dense_1"))
In [91]:
##################################
# Displaying the model summary
# for a complex CNN with batch normalization regularization
##################################
print(model_bnr_complex.summary())
Model: "model_bnr_complex"
| Layer (type) | Output Shape | Param # |
|---|---|---|
| bnr_complex_conv2d_0 (Conv2D) | (None, 227, 227, 16) | 160 |
| bnr_complex_max_pooling2d_0 (MaxPooling2D) | (None, 113, 113, 16) | 0 |
| bnr_complex_conv2d_1 (Conv2D) | (None, 113, 113, 32) | 4,640 |
| bnr_complex_max_pooling2d_1 (MaxPooling2D) | (None, 56, 56, 32) | 0 |
| bnr_complex_conv2d_2 (Conv2D) | (None, 56, 56, 64) | 18,496 |
| bnr_complex_batch_normalization (BatchNormalization) | (None, 56, 56, 64) | 256 |
| bnr_complex_activation (Activation) | (None, 56, 56, 64) | 0 |
| bnr_complex_max_pooling2d_2 (MaxPooling2D) | (None, 28, 28, 64) | 0 |
| bnr_complex_flatten (Flatten) | (None, 50176) | 0 |
| bnr_complex_dense_0 (Dense) | (None, 128) | 6,422,656 |
| bnr_complex_dense_1 (Dense) | (None, 4) | 516 |
Total params: 6,446,724 (24.59 MB)
Trainable params: 6,446,596 (24.59 MB)
Non-trainable params: 128 (512.00 B)
None
In [92]:
##################################
# Displaying the model layers
# for a complex CNN with batch normalization regularization
##################################
model_bnr_complex_layer_names = [layer.name for layer in model_bnr_complex.layers]
print("Layer Names:", model_bnr_complex_layer_names)
Layer Names: ['bnr_complex_conv2d_0', 'bnr_complex_max_pooling2d_0', 'bnr_complex_conv2d_1', 'bnr_complex_max_pooling2d_1', 'bnr_complex_conv2d_2', 'bnr_complex_batch_normalization', 'bnr_complex_activation', 'bnr_complex_max_pooling2d_2', 'bnr_complex_flatten', 'bnr_complex_dense_0', 'bnr_complex_dense_1']
In [93]:
##################################
# Displaying the number of weights
# for each model layer
# for a complex CNN with batch normalization regularization
##################################
for layer in model_bnr_complex.layers:
if hasattr(layer, 'weights'):
print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: bnr_complex_conv2d_0, Number of Weights: 2
Layer: bnr_complex_max_pooling2d_0, Number of Weights: 0
Layer: bnr_complex_conv2d_1, Number of Weights: 2
Layer: bnr_complex_max_pooling2d_1, Number of Weights: 0
Layer: bnr_complex_conv2d_2, Number of Weights: 2
Layer: bnr_complex_batch_normalization, Number of Weights: 4
Layer: bnr_complex_activation, Number of Weights: 0
Layer: bnr_complex_max_pooling2d_2, Number of Weights: 0
Layer: bnr_complex_flatten, Number of Weights: 0
Layer: bnr_complex_dense_0, Number of Weights: 2
Layer: bnr_complex_dense_1, Number of Weights: 2
In [94]:
##################################
# Displaying the number of parameters
# for each model layer
# for a complex CNN with batch normalization regularization
##################################
total_parameters = 0
for layer in model_bnr_complex.layers:
layer_parameters = layer.count_params()
total_parameters += layer_parameters
print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: bnr_complex_conv2d_0, Parameters: 160
Layer: bnr_complex_max_pooling2d_0, Parameters: 0
Layer: bnr_complex_conv2d_1, Parameters: 4640
Layer: bnr_complex_max_pooling2d_1, Parameters: 0
Layer: bnr_complex_conv2d_2, Parameters: 18496
Layer: bnr_complex_batch_normalization, Parameters: 256
Layer: bnr_complex_activation, Parameters: 0
Layer: bnr_complex_max_pooling2d_2, Parameters: 0
Layer: bnr_complex_flatten, Parameters: 0
Layer: bnr_complex_dense_0, Parameters: 6422656
Layer: bnr_complex_dense_1, Parameters: 516

Total Parameters in the Model: 6446724
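The output shapes in the complex model's summary follow directly from the layer hyperparameters: each `'same'`-padded 3×3 convolution preserves the spatial dimensions, and each 2×2 max-pooling stage (stride 2) floors them by half. A short trace reproduces the 227 → 113 → 56 → 28 progression and the resulting size of the first dense layer:

```python
# Tracing spatial dimensions through the complex CNN:
# 'same' convolutions keep H and W; each 2x2 max-pool halves them (floored).

size = 227
for _ in range(3):                 # three pooling stages
    size //= 2                     # 227 -> 113 -> 56 -> 28
flattened = size * size * 64       # final feature map flattened: 28*28*64
dense_0_params = flattened * 128 + 128  # weights + biases of bnr_complex_dense_0
print(size, flattened, dense_0_params)  # 28 50176 6422656
```

This is why the dense head dominates the parameter budget: 6,422,656 of the model's 6,446,724 parameters sit in `bnr_complex_dense_0`.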
1.6.3.4 CNN With Dropout and Batch Normalization Regularization ¶
In [95]:
##################################
# Formulating the network architecture
# for a simple CNN with dropout and batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_cdrbnr_simple = Sequential(name="model_cdrbnr_simple")
model_cdrbnr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape, name="cdrbnr_simple_conv2d_0"))
model_cdrbnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_simple_max_pooling2d_0"))
model_cdrbnr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', name="cdrbnr_simple_conv2d_1"))
model_cdrbnr_simple.add(BatchNormalization(name="cdrbnr_simple_batch_normalization"))
model_cdrbnr_simple.add(Activation('relu', name="cdrbnr_simple_activation"))
model_cdrbnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_simple_max_pooling2d_1"))
model_cdrbnr_simple.add(Flatten(name="cdrbnr_simple_flatten"))
model_cdrbnr_simple.add(Dense(units=32, activation='relu', name="cdrbnr_simple_dense_0"))
model_cdrbnr_simple.add(Dropout(rate=0.30, name="cdrbnr_simple_dropout"))
model_cdrbnr_simple.add(Dense(units=num_classes, activation='softmax', name="cdrbnr_simple_dense_1"))
In [96]:
##################################
# Displaying the model summary
# for a simple CNN with dropout and batch normalization regularization
##################################
print(model_cdrbnr_simple.summary())
Model: "model_cdrbnr_simple"
| Layer (type) | Output Shape | Param # |
|---|---|---|
| cdrbnr_simple_conv2d_0 (Conv2D) | (None, 227, 227, 8) | 80 |
| cdrbnr_simple_max_pooling2d_0 (MaxPooling2D) | (None, 113, 113, 8) | 0 |
| cdrbnr_simple_conv2d_1 (Conv2D) | (None, 113, 113, 16) | 1,168 |
| cdrbnr_simple_batch_normalization (BatchNormalization) | (None, 113, 113, 16) | 64 |
| cdrbnr_simple_activation (Activation) | (None, 113, 113, 16) | 0 |
| cdrbnr_simple_max_pooling2d_1 (MaxPooling2D) | (None, 56, 56, 16) | 0 |
| cdrbnr_simple_flatten (Flatten) | (None, 50176) | 0 |
| cdrbnr_simple_dense_0 (Dense) | (None, 32) | 1,605,664 |
| cdrbnr_simple_dropout (Dropout) | (None, 32) | 0 |
| cdrbnr_simple_dense_1 (Dense) | (None, 4) | 132 |
Total params: 1,607,108 (6.13 MB)
Trainable params: 1,607,076 (6.13 MB)
Non-trainable params: 32 (128.00 B)
None
In [97]:
##################################
# Displaying the model layers
# for a simple CNN with dropout and batch normalization regularization
##################################
model_cdrbnr_simple_layer_names = [layer.name for layer in model_cdrbnr_simple.layers]
print("Layer Names:", model_cdrbnr_simple_layer_names)
Layer Names: ['cdrbnr_simple_conv2d_0', 'cdrbnr_simple_max_pooling2d_0', 'cdrbnr_simple_conv2d_1', 'cdrbnr_simple_batch_normalization', 'cdrbnr_simple_activation', 'cdrbnr_simple_max_pooling2d_1', 'cdrbnr_simple_flatten', 'cdrbnr_simple_dense_0', 'cdrbnr_simple_dropout', 'cdrbnr_simple_dense_1']
In [98]:
##################################
# Displaying the number of weights
# for each model layer
# for a simple CNN with dropout and batch normalization regularization
##################################
for layer in model_cdrbnr_simple.layers:
if hasattr(layer, 'weights'):
print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: cdrbnr_simple_conv2d_0, Number of Weights: 2
Layer: cdrbnr_simple_max_pooling2d_0, Number of Weights: 0
Layer: cdrbnr_simple_conv2d_1, Number of Weights: 2
Layer: cdrbnr_simple_batch_normalization, Number of Weights: 4
Layer: cdrbnr_simple_activation, Number of Weights: 0
Layer: cdrbnr_simple_max_pooling2d_1, Number of Weights: 0
Layer: cdrbnr_simple_flatten, Number of Weights: 0
Layer: cdrbnr_simple_dense_0, Number of Weights: 2
Layer: cdrbnr_simple_dropout, Number of Weights: 0
Layer: cdrbnr_simple_dense_1, Number of Weights: 2
In [99]:
##################################
# Displaying the number of parameters
# for each model layer
# for a simple CNN with dropout and batch normalization regularization
##################################
total_parameters = 0
for layer in model_cdrbnr_simple.layers:
layer_parameters = layer.count_params()
total_parameters += layer_parameters
print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: cdrbnr_simple_conv2d_0, Parameters: 80
Layer: cdrbnr_simple_max_pooling2d_0, Parameters: 0
Layer: cdrbnr_simple_conv2d_1, Parameters: 1168
Layer: cdrbnr_simple_batch_normalization, Parameters: 64
Layer: cdrbnr_simple_activation, Parameters: 0
Layer: cdrbnr_simple_max_pooling2d_1, Parameters: 0
Layer: cdrbnr_simple_flatten, Parameters: 0
Layer: cdrbnr_simple_dense_0, Parameters: 1605664
Layer: cdrbnr_simple_dropout, Parameters: 0
Layer: cdrbnr_simple_dense_1, Parameters: 132

Total Parameters in the Model: 1607108
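The Dropout layer adds no parameters because it only masks activations. Keras uses "inverted" dropout: during training, each unit is zeroed with probability `rate` and the survivors are scaled by 1 / (1 − rate), so the expected activation is unchanged and inference can simply pass values through. The sketch below illustrates that mechanism with NumPy (the function name is illustrative, not a Keras internal):

```python
import numpy as np

# Illustrative sketch of inverted dropout as applied after
# cdrbnr_simple_dense_0 (rate=0.30); not Keras internals.

def dropout_train(x, rate, rng):
    mask = rng.random(x.shape) >= rate   # keep each unit with probability 1 - rate
    return x * mask / (1.0 - rate)       # inverted scaling keeps E[output] == x

rng = np.random.default_rng(0)
x = np.ones((1, 32))                     # stand-in for the 32 dense activations
y = dropout_train(x, 0.30, rng)
print(y.shape)  # (1, 32); roughly 30% of entries are 0, the rest are 1/0.7
```

Because the scaling happens at training time, the layer is an identity function at inference, which is why it contributes zero parameters to the totals above.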
In [100]:
##################################
# Formulating the network architecture
# for a complex CNN with dropout and batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_cdrbnr_complex = Sequential(name="model_cdrbnr_complex")
model_cdrbnr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape, name="cdrbnr_complex_conv2d_0"))
model_cdrbnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_complex_max_pooling2d_0"))
model_cdrbnr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', name="cdrbnr_complex_conv2d_1"))
model_cdrbnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_complex_max_pooling2d_1"))
model_cdrbnr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu', name="cdrbnr_complex_conv2d_2"))
model_cdrbnr_complex.add(BatchNormalization(name="cdrbnr_complex_batch_normalization"))
model_cdrbnr_complex.add(Activation('relu', name="cdrbnr_complex_activation"))
model_cdrbnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_complex_max_pooling2d_2"))
model_cdrbnr_complex.add(Flatten(name="cdrbnr_complex_flatten"))
model_cdrbnr_complex.add(Dense(units=128, activation='relu', name="cdrbnr_complex_dense_0"))
model_cdrbnr_complex.add(Dropout(rate=0.30, name="cdrbnr_complex_dropout"))
model_cdrbnr_complex.add(Dense(units=num_classes, activation='softmax', name="cdrbnr_complex_dense_1"))
In [101]:
##################################
# Displaying the model summary
# for a complex CNN with dropout and batch normalization regularization
##################################
print(model_cdrbnr_complex.summary())
Model: "model_cdrbnr_complex"
| Layer (type) | Output Shape | Param # |
|---|---|---|
| cdrbnr_complex_conv2d_0 (Conv2D) | (None, 227, 227, 16) | 160 |
| cdrbnr_complex_max_pooling2d_0 (MaxPooling2D) | (None, 113, 113, 16) | 0 |
| cdrbnr_complex_conv2d_1 (Conv2D) | (None, 113, 113, 32) | 4,640 |
| cdrbnr_complex_max_pooling2d_1 (MaxPooling2D) | (None, 56, 56, 32) | 0 |
| cdrbnr_complex_conv2d_2 (Conv2D) | (None, 56, 56, 64) | 18,496 |
| cdrbnr_complex_batch_normalization (BatchNormalization) | (None, 56, 56, 64) | 256 |
| cdrbnr_complex_activation (Activation) | (None, 56, 56, 64) | 0 |
| cdrbnr_complex_max_pooling2d_2 (MaxPooling2D) | (None, 28, 28, 64) | 0 |
| cdrbnr_complex_flatten (Flatten) | (None, 50176) | 0 |
| cdrbnr_complex_dense_0 (Dense) | (None, 128) | 6,422,656 |
| cdrbnr_complex_dropout (Dropout) | (None, 128) | 0 |
| cdrbnr_complex_dense_1 (Dense) | (None, 4) | 516 |
Total params: 6,446,724 (24.59 MB)
Trainable params: 6,446,596 (24.59 MB)
Non-trainable params: 128 (512.00 B)
None
In [102]:
##################################
# Displaying the model layers
# for a complex CNN with dropout and batch normalization regularization
##################################
model_cdrbnr_complex_layer_names = [layer.name for layer in model_cdrbnr_complex.layers]
print("Layer Names:", model_cdrbnr_complex_layer_names)
Layer Names: ['cdrbnr_complex_conv2d_0', 'cdrbnr_complex_max_pooling2d_0', 'cdrbnr_complex_conv2d_1', 'cdrbnr_complex_max_pooling2d_1', 'cdrbnr_complex_conv2d_2', 'cdrbnr_complex_batch_normalization', 'cdrbnr_complex_activation', 'cdrbnr_complex_max_pooling2d_2', 'cdrbnr_complex_flatten', 'cdrbnr_complex_dense_0', 'cdrbnr_complex_dropout', 'cdrbnr_complex_dense_1']
In [103]:
##################################
# Displaying the number of weights
# for each model layer
# for a complex CNN with dropout and batch normalization regularization
##################################
for layer in model_cdrbnr_complex.layers:
if hasattr(layer, 'weights'):
print(f"Layer: {layer.name}, Number of Weights: {len(layer.get_weights())}")
Layer: cdrbnr_complex_conv2d_0, Number of Weights: 2
Layer: cdrbnr_complex_max_pooling2d_0, Number of Weights: 0
Layer: cdrbnr_complex_conv2d_1, Number of Weights: 2
Layer: cdrbnr_complex_max_pooling2d_1, Number of Weights: 0
Layer: cdrbnr_complex_conv2d_2, Number of Weights: 2
Layer: cdrbnr_complex_batch_normalization, Number of Weights: 4
Layer: cdrbnr_complex_activation, Number of Weights: 0
Layer: cdrbnr_complex_max_pooling2d_2, Number of Weights: 0
Layer: cdrbnr_complex_flatten, Number of Weights: 0
Layer: cdrbnr_complex_dense_0, Number of Weights: 2
Layer: cdrbnr_complex_dropout, Number of Weights: 0
Layer: cdrbnr_complex_dense_1, Number of Weights: 2
In [104]:
##################################
# Displaying the number of parameters
# for each model layer
# for a complex CNN with dropout and batch normalization regularization
##################################
total_parameters = 0
for layer in model_cdrbnr_complex.layers:
layer_parameters = layer.count_params()
total_parameters += layer_parameters
print(f"Layer: {layer.name}, Parameters: {layer_parameters}")
print("\nTotal Parameters in the Model:", total_parameters)
Layer: cdrbnr_complex_conv2d_0, Parameters: 160
Layer: cdrbnr_complex_max_pooling2d_0, Parameters: 0
Layer: cdrbnr_complex_conv2d_1, Parameters: 4640
Layer: cdrbnr_complex_max_pooling2d_1, Parameters: 0
Layer: cdrbnr_complex_conv2d_2, Parameters: 18496
Layer: cdrbnr_complex_batch_normalization, Parameters: 256
Layer: cdrbnr_complex_activation, Parameters: 0
Layer: cdrbnr_complex_max_pooling2d_2, Parameters: 0
Layer: cdrbnr_complex_flatten, Parameters: 0
Layer: cdrbnr_complex_dense_0, Parameters: 6422656
Layer: cdrbnr_complex_dropout, Parameters: 0
Layer: cdrbnr_complex_dense_1, Parameters: 516

Total Parameters in the Model: 6446724
1.6.4 CNN With No Regularization Model Fitting | Hyperparameter Tuning | Validation ¶
In [105]:
##################################
# Formulating the network architecture
# for a simple CNN with no regularization
##################################
set_seed()
batch_size = 32
model_nr_simple = Sequential(name="model_nr_simple")
model_nr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(227, 227, 1), name="nr_simple_conv2d_0"))
model_nr_simple.add(MaxPooling2D(pool_size=(2, 2), name="nr_simple_max_pooling2d_0"))
model_nr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', name="nr_simple_conv2d_1"))
model_nr_simple.add(MaxPooling2D(pool_size=(2, 2), name="nr_simple_max_pooling2d_1"))
model_nr_simple.add(Flatten(name="nr_simple_flatten"))
model_nr_simple.add(Dense(units=32, activation='relu', name="nr_simple_dense_0"))
model_nr_simple.add(Dense(units=num_classes, activation='softmax', name="nr_simple_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_nr_simple.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
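Recall is tracked here because, in tumor classification, a false negative (a missed tumor) is costlier than a false alarm. For reference, the macro-averaged recall that the later evaluation cells report (via `precision_recall_fscore_support(..., average='macro')`) can be sketched by hand; the labels below are hypothetical:

```python
import numpy as np

# Minimal sketch of macro-averaged recall: per-class TP / (TP + FN),
# averaged over classes. Labels are illustrative, not dataset values.

def macro_recall(y_true, y_pred, num_classes):
    recalls = []
    for c in range(num_classes):
        actual = (y_true == c)
        if actual.sum() == 0:
            continue                      # skip classes absent from y_true
        recalls.append((y_pred[actual] == c).mean())
    return float(np.mean(recalls))

y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 0, 3, 3])
print(macro_recall(y_true, y_pred, 4))  # 0.75
```

Note that Keras' built-in `Recall` metric, applied to softmax outputs during training, thresholds probabilities rather than computing this macro average exactly, so the training-time values and the validation-set macro recall below are close but not identical quantities.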
In [106]:
##################################
# Fitting the model
# for a simple CNN with no regularization
##################################
epochs = 20
set_seed()
model_nr_simple_history = model_nr_simple.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, nr_simple_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 35s 227ms/step - loss: 0.8697 - recall: 0.4612 - val_loss: 0.9379 - val_recall: 0.6556 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 33s 228ms/step - loss: 0.4279 - recall: 0.8129 - val_loss: 0.8684 - val_recall: 0.6792 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 35s 240ms/step - loss: 0.3339 - recall: 0.8637 - val_loss: 0.8071 - val_recall: 0.7239 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 33s 227ms/step - loss: 0.3062 - recall: 0.8771 - val_loss: 0.9367 - val_recall: 0.7528 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 53s 367ms/step - loss: 0.2505 - recall: 0.9024 - val_loss: 0.8099 - val_recall: 0.7450 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 34s 239ms/step - loss: 0.2282 - recall: 0.9033 - val_loss: 0.7319 - val_recall: 0.7862 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 39s 225ms/step - loss: 0.1857 - recall: 0.9301 - val_loss: 0.8285 - val_recall: 0.7783 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 222ms/step - loss: 0.1783 - recall: 0.9361 - val_loss: 0.8437 - val_recall: 0.7642 - learning_rate: 0.0010
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 224ms/step - loss: 0.1366 - recall: 0.9491 - val_loss: 0.8675 - val_recall: 0.8089 - learning_rate: 0.0010
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 223ms/step - loss: 0.1127 - recall: 0.9611 - val_loss: 0.7600 - val_recall: 0.8186 - learning_rate: 1.0000e-04
Epoch 11/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 220ms/step - loss: 0.0880 - recall: 0.9663 - val_loss: 0.7769 - val_recall: 0.8177 - learning_rate: 1.0000e-04
Epoch 12/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 221ms/step - loss: 0.1055 - recall: 0.9612 - val_loss: 0.7722 - val_recall: 0.8221 - learning_rate: 1.0000e-04
Epoch 13/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 220ms/step - loss: 0.0787 - recall: 0.9733 - val_loss: 0.7732 - val_recall: 0.8221 - learning_rate: 1.0000e-05
Epoch 14/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 224ms/step - loss: 0.0926 - recall: 0.9680 - val_loss: 0.7768 - val_recall: 0.8221 - learning_rate: 1.0000e-05
Epoch 15/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 222ms/step - loss: 0.0824 - recall: 0.9691 - val_loss: 0.7803 - val_recall: 0.8212 - learning_rate: 1.0000e-05
Epoch 16/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 225ms/step - loss: 0.0939 - recall: 0.9701 - val_loss: 0.7808 - val_recall: 0.8203 - learning_rate: 1.0000e-06
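The callbacks passed to `fit` (`early_stopping`, `reduce_lr`, `nr_simple_model_checkpoint`) are defined earlier in the notebook. The learning-rate trace in the log (1e-3 → 1e-4 → 1e-5 → 1e-6) is consistent with a `ReduceLROnPlateau`-style schedule that multiplies the rate by an assumed `factor=0.1` each time the validation loss plateaus; a minimal re-enactment of that arithmetic:

```python
# Re-enacting the plateau schedule seen in the log, assuming factor=0.1
# (the actual callback configuration lives in an earlier notebook cell).

def plateau_schedule(initial_lr, factor, reductions):
    rates = [initial_lr]
    lr = initial_lr
    for _ in range(reductions):
        lr *= factor                 # applied on each plateau of val_loss
        rates.append(lr)
    return rates

print(plateau_schedule(1e-3, 0.1, 3))  # ~[1e-3, 1e-4, 1e-5, 1e-6]
```

Training halts at epoch 16 rather than 20 because the early-stopping callback ends the run once the monitored validation loss stops improving.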
In [107]:
##################################
# Evaluating the model
# for a simple CNN with no regularization
# on the independent validation set
##################################
model_nr_simple_y_pred = model_nr_simple.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 4s 98ms/step
In [108]:
##################################
# Plotting the loss profile
# for a simple CNN with no regularization
# on the training and validation sets
##################################
plot_training_history(model_nr_simple_history, 'Simple CNN With No Regularization : ')
In [109]:
##################################
# Consolidating the predictions
# for a simple CNN with no regularization
# on the validation set
##################################
model_nr_simple_predictions = np.argmax(model_nr_simple_y_pred, axis=1)
model_nr_simple_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a simple CNN with no regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_nr_simple_y_true, model_nr_simple_predictions), columns=classes, index=classes)
##################################
# Plotting the confusion matrix
# for a simple CNN with no regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Simple CNN With No Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
WARNING:tensorflow:From D:\Github_Codes\ProjectPortfolio\Portfolio_Project_56\mdeploy_venv\Lib\site-packages\keras\src\backend\common\global_state.py:82: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.
In [110]:
##################################
# Calculating the model accuracy
# for a simple CNN with no regularization
# for the entire validation set
##################################
model_nr_simple_acc = accuracy_score(model_nr_simple_y_true, model_nr_simple_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with no regularization
# for the entire validation set
##################################
model_nr_simple_results_all = precision_recall_fscore_support(model_nr_simple_y_true, model_nr_simple_predictions, average='macro',zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with no regularization
# for each category of the validation set
##################################
model_nr_simple_results_class = precision_recall_fscore_support(model_nr_simple_y_true, model_nr_simple_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with no regularization
##################################
metric_columns = ['Precision','Recall','F-Score','Support']
model_nr_simple_all_df = pd.concat([pd.DataFrame(list(model_nr_simple_results_class)).T,pd.DataFrame(list(model_nr_simple_results_all)).T])
model_nr_simple_all_df.columns = metric_columns
model_nr_simple_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Simple CNN With No Regularization : Validation Set Classification Performance')
model_nr_simple_all_df
Simple CNN With No Regularization : Validation Set Classification Performance
Out[110]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.893238 | 0.786834 | 0.836667 | 319.0 |
| Glioma | 0.928571 | 0.787879 | 0.852459 | 264.0 |
| Meningioma | 0.624573 | 0.685393 | 0.653571 | 267.0 |
| Pituitary | 0.772595 | 0.910653 | 0.835962 | 291.0 |
| Total | 0.804744 | 0.792690 | 0.794665 | NaN |
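The tabulated F-scores can be cross-checked by hand: the F-score reported by `precision_recall_fscore_support` is the harmonic mean of precision and recall, F1 = 2PR / (P + R). Using the 'No Tumor' row as an example:

```python
# Verifying one F-score from the validation table: F1 = 2PR / (P + R).

precision, recall = 0.893238, 0.786834   # 'No Tumor' row values
f_score = 2 * precision * recall / (precision + recall)
print(round(f_score, 6))  # 0.836667, matching the table
```

The harmonic mean penalizes imbalance, which is why the 'Meningioma' class (precision 0.62, recall 0.69) drags the macro F-score well below the other classes despite a similar support.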
In [111]:
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with no regularization
##################################
model_nr_simple_model_list = []
model_nr_simple_measure_list = []
model_nr_simple_category_list = []
model_nr_simple_value_list = []
for i in range(3):
for j in range(5):
model_nr_simple_model_list.append('CNN_NR_Simple')
model_nr_simple_measure_list.append(metric_columns[i])
model_nr_simple_category_list.append(model_nr_simple_all_df.index[j])
model_nr_simple_value_list.append(model_nr_simple_all_df.iloc[j,i])
model_nr_simple_all_summary = pd.DataFrame(zip(model_nr_simple_model_list,
model_nr_simple_measure_list,
model_nr_simple_category_list,
model_nr_simple_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
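The nested loops above build a long-format table from the wide metrics frame; the same reshaping can be expressed more directly with `DataFrame.melt`. This sketch uses a small stand-in table with illustrative values, since the point is the reshape rather than the numbers:

```python
import pandas as pd

# Alternative to the nested loops: wide-to-long reshape via melt.
# The metric values here are illustrative stand-ins.

metrics_df = pd.DataFrame(
    {"Precision": [0.89, 0.93], "Recall": [0.79, 0.79], "F-Score": [0.84, 0.85]},
    index=["No Tumor", "Glioma"],
)
long_df = (
    metrics_df.reset_index()
    .rename(columns={"index": "Image.Category"})
    .melt(id_vars="Image.Category", var_name="Model.Metric", value_name="Metric.Value")
    .assign(**{"CNN.Model.Name": "CNN_NR_Simple"})
)
print(long_df.shape)  # (6, 4): 2 categories x 3 metrics, 4 columns
```

Either approach yields one row per (model, metric, category) triple, which is the shape the later model-comparison plots consume.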
In [112]:
##################################
# Formulating the network architecture
# for a complex CNN with no regularization
##################################
set_seed()
batch_size = 32
model_nr_complex = Sequential(name="model_nr_complex")
model_nr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(227, 227, 1), name="nr_complex_conv2d_0"))
model_nr_complex.add(MaxPooling2D(pool_size=(2, 2), name="nr_complex_max_pooling2d_0"))
model_nr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', name="nr_complex_conv2d_1"))
model_nr_complex.add(MaxPooling2D(pool_size=(2, 2), name="nr_complex_max_pooling2d_1"))
model_nr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu', name="nr_complex_conv2d_2"))
model_nr_complex.add(MaxPooling2D(pool_size=(2, 2), name="nr_complex_max_pooling2d_2"))
model_nr_complex.add(Flatten(name="nr_complex_flatten"))
model_nr_complex.add(Dense(units=128, activation='relu', name="nr_complex_dense_0"))
model_nr_complex.add(Dense(units=num_classes, activation='softmax', name="nr_complex_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_nr_complex.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
In [113]:
##################################
# Fitting the model
# for a complex CNN with no regularization
##################################
epochs = 20
set_seed()
model_nr_complex_history = model_nr_complex.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, nr_complex_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 55s 371ms/step - loss: 1.0913 - recall: 0.3645 - val_loss: 0.8411 - val_recall: 0.6915 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.4091 - recall: 0.8322 - val_loss: 0.8689 - val_recall: 0.6862 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 52s 359ms/step - loss: 0.2674 - recall: 0.8948 - val_loss: 0.8096 - val_recall: 0.7327 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 373ms/step - loss: 0.2156 - recall: 0.9202 - val_loss: 0.8086 - val_recall: 0.7862 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 357ms/step - loss: 0.1748 - recall: 0.9339 - val_loss: 0.8040 - val_recall: 0.7625 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 83s 362ms/step - loss: 0.1469 - recall: 0.9431 - val_loss: 0.7236 - val_recall: 0.7984 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 348ms/step - loss: 0.1025 - recall: 0.9621 - val_loss: 0.7801 - val_recall: 0.7993 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 354ms/step - loss: 0.0918 - recall: 0.9644 - val_loss: 0.9317 - val_recall: 0.8063 - learning_rate: 0.0010
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 355ms/step - loss: 0.0861 - recall: 0.9650 - val_loss: 0.8448 - val_recall: 0.8238 - learning_rate: 0.0010
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 356ms/step - loss: 0.0560 - recall: 0.9774 - val_loss: 0.8052 - val_recall: 0.8300 - learning_rate: 1.0000e-04
Epoch 11/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 354ms/step - loss: 0.0285 - recall: 0.9933 - val_loss: 0.8621 - val_recall: 0.8186 - learning_rate: 1.0000e-04
Epoch 12/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 353ms/step - loss: 0.0294 - recall: 0.9919 - val_loss: 0.8798 - val_recall: 0.8256 - learning_rate: 1.0000e-04
Epoch 13/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 353ms/step - loss: 0.0235 - recall: 0.9925 - val_loss: 0.8846 - val_recall: 0.8230 - learning_rate: 1.0000e-05
Epoch 14/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 356ms/step - loss: 0.0297 - recall: 0.9903 - val_loss: 0.8888 - val_recall: 0.8247 - learning_rate: 1.0000e-05
Epoch 15/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 354ms/step - loss: 0.0237 - recall: 0.9951 - val_loss: 0.9018 - val_recall: 0.8230 - learning_rate: 1.0000e-05
Epoch 16/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 351ms/step - loss: 0.0283 - recall: 0.9913 - val_loss: 0.9021 - val_recall: 0.8238 - learning_rate: 1.0000e-06
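Because `early_stopping` and the checkpoint callback track the best validation epoch, it is handy to recover which epoch that was from the returned history. A minimal sketch (hypothetical helper, operating on a Keras-style `history.history` dictionary of per-epoch metric lists):

```python
def best_epoch(history_dict, monitor='val_recall', mode='max'):
    # Return the 1-based epoch index and value of the best monitored metric.
    values = history_dict[monitor]
    best_value = max(values) if mode == 'max' else min(values)
    return values.index(best_value) + 1, best_value
```

For example, `best_epoch(model_nr_complex_history.history, 'val_loss', 'min')` would identify the epoch whose weights the checkpoint retained.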
In [114]:
##################################
# Evaluating the model
# for a complex CNN with no regularization
# on the independent validation set
##################################
model_nr_complex_y_pred = model_nr_complex.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 5s 132ms/step
In [115]:
##################################
# Plotting the loss profile
# for a complex CNN with no regularization
# on the training and validation sets
##################################
plot_training_history(model_nr_complex_history, 'Complex CNN With No Regularization : ')
In [116]:
##################################
# Consolidating the predictions
# for a complex CNN with no regularization
# on the validation set
##################################
model_nr_complex_predictions = np.argmax(model_nr_complex_y_pred, axis=1)
model_nr_complex_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a complex CNN with no regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_nr_complex_y_true, model_nr_complex_predictions), columns=classes, index=classes)
##################################
# Plotting the confusion matrix
# for a complex CNN with no regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Complex CNN With No Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
In [117]:
##################################
# Calculating the model accuracy
# for a complex CNN with no regularization
# for the entire validation set
##################################
model_nr_complex_acc = accuracy_score(model_nr_complex_y_true, model_nr_complex_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with no regularization
# for the entire validation set
##################################
model_nr_complex_results_all = precision_recall_fscore_support(model_nr_complex_y_true, model_nr_complex_predictions, average='macro',zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with no regularization
# for each category of the validation set
##################################
model_nr_complex_results_class = precision_recall_fscore_support(model_nr_complex_y_true, model_nr_complex_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with no regularization
##################################
metric_columns = ['Precision','Recall','F-Score','Support']
model_nr_complex_all_df = pd.concat([pd.DataFrame(list(model_nr_complex_results_class)).T,pd.DataFrame(list(model_nr_complex_results_all)).T])
model_nr_complex_all_df.columns = metric_columns
model_nr_complex_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Complex CNN With No Regularization : Validation Set Classification Performance')
model_nr_complex_all_df
Complex CNN With No Regularization : Validation Set Classification Performance
Out[117]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.863057 | 0.849530 | 0.856240 | 319.0 |
| Glioma | 0.871486 | 0.821970 | 0.846004 | 264.0 |
| Meningioma | 0.655602 | 0.591760 | 0.622047 | 267.0 |
| Pituitary | 0.795252 | 0.920962 | 0.853503 | 291.0 |
| Total | 0.796349 | 0.796055 | 0.794449 | NaN |
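The per-class figures above follow directly from the confusion matrix: precision divides each diagonal entry by its column total (everything predicted as that class), while recall divides it by its row total (everything that actually belongs to that class). A minimal numpy sketch of that derivation (illustrative helper, equivalent to what `precision_recall_fscore_support` computes per class):

```python
import numpy as np

def per_class_precision_recall(cm):
    # cm: confusion matrix with rows = actual classes, columns = predicted classes
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)  # column sums: all samples predicted as class k
    recall = tp / cm.sum(axis=1)     # row sums: all samples actually in class k
    return precision, recall
```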
In [118]:
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with no regularization
##################################
model_nr_complex_model_list = []
model_nr_complex_measure_list = []
model_nr_complex_category_list = []
model_nr_complex_value_list = []
for i in range(3):
    for j in range(5):
        model_nr_complex_model_list.append('CNN_NR_Complex')
        model_nr_complex_measure_list.append(metric_columns[i])
        model_nr_complex_category_list.append(model_nr_complex_all_df.index[j])
        model_nr_complex_value_list.append(model_nr_complex_all_df.iloc[j, i])
model_nr_complex_all_summary = pd.DataFrame(zip(model_nr_complex_model_list,
model_nr_complex_measure_list,
model_nr_complex_category_list,
model_nr_complex_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
1.6.5 CNN With Dropout Regularization Model Fitting | Hyperparameter Tuning | Validation ¶
In [119]:
##################################
# Formulating the network architecture
# for a simple CNN with dropout regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_dr_simple = Sequential(name="model_dr_simple")
model_dr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='Same', activation='relu', input_shape=input_shape, name="dr_simple_conv2d_0"))
model_dr_simple.add(MaxPooling2D(pool_size=(2, 2), name="dr_simple_max_pooling2d_0"))
model_dr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding = 'Same', activation='relu', name="dr_simple_conv2d_1"))
model_dr_simple.add(MaxPooling2D(pool_size=(2, 2), name="dr_simple_max_pooling2d_1"))
model_dr_simple.add(Flatten(name="dr_simple_flatten"))
model_dr_simple.add(Dense(units=32, activation='relu', name="dr_simple_dense_0"))
model_dr_simple.add(Dropout(rate=0.30, name="dr_simple_dropout"))
model_dr_simple.add(Dense(units=num_classes, activation='softmax', name="dr_simple_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_dr_simple.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
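`Dropout(rate=0.30)` acts only at training time: it zeroes a random 30% of the dense layer's activations and rescales the survivors so the expected output is unchanged, then behaves as an identity at inference. A minimal numpy sketch of this inverted-dropout behavior (illustrative only, not Keras' implementation):

```python
import numpy as np

def inverted_dropout(activations, rate, rng):
    # Zero a fraction `rate` of units and rescale survivors by 1/(1-rate)
    # so the expected activation matches the inference-time identity pass.
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones((4, 32))           # a batch of dense-layer activations
y = inverted_dropout(x, rate=0.30, rng=rng)
# roughly 30% of y is zero; the rest equals 1 / 0.7
```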
In [120]:
##################################
# Fitting the model
# for a simple CNN with dropout regularization
##################################
epochs = 20
set_seed()
model_dr_simple_history = model_dr_simple.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, dr_simple_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 35s 234ms/step - loss: 1.3558 - recall: 0.1436 - val_loss: 1.0029 - val_recall: 0.4259 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 34s 234ms/step - loss: 0.7573 - recall: 0.5541 - val_loss: 0.8809 - val_recall: 0.5995 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 218ms/step - loss: 0.6801 - recall: 0.5991 - val_loss: 0.8098 - val_recall: 0.6784 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 217ms/step - loss: 0.5949 - recall: 0.6555 - val_loss: 0.9510 - val_recall: 0.6319 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 215ms/step - loss: 0.5358 - recall: 0.6888 - val_loss: 0.8406 - val_recall: 0.6687 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 214ms/step - loss: 0.5175 - recall: 0.7039 - val_loss: 0.7385 - val_recall: 0.6950 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 220ms/step - loss: 0.5096 - recall: 0.7264 - val_loss: 0.8432 - val_recall: 0.7108 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 38s 263ms/step - loss: 0.5263 - recall: 0.7275 - val_loss: 0.7060 - val_recall: 0.7432 - learning_rate: 0.0010
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 36s 225ms/step - loss: 0.4338 - recall: 0.7747 - val_loss: 0.8316 - val_recall: 0.7546 - learning_rate: 0.0010
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 34s 235ms/step - loss: 0.4617 - recall: 0.7647 - val_loss: 0.8108 - val_recall: 0.7432 - learning_rate: 0.0010
Epoch 11/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 218ms/step - loss: 0.4197 - recall: 0.7834 - val_loss: 0.8501 - val_recall: 0.7406 - learning_rate: 0.0010
Epoch 12/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 216ms/step - loss: 0.4121 - recall: 0.7925 - val_loss: 0.7721 - val_recall: 0.7634 - learning_rate: 1.0000e-04
Epoch 13/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 215ms/step - loss: 0.3817 - recall: 0.8064 - val_loss: 0.7482 - val_recall: 0.7713 - learning_rate: 1.0000e-04
Epoch 14/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 220ms/step - loss: 0.3763 - recall: 0.8102 - val_loss: 0.7683 - val_recall: 0.7634 - learning_rate: 1.0000e-04
Epoch 15/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 218ms/step - loss: 0.3781 - recall: 0.7994 - val_loss: 0.7877 - val_recall: 0.7642 - learning_rate: 1.0000e-05
Epoch 16/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 221ms/step - loss: 0.3701 - recall: 0.8088 - val_loss: 0.7936 - val_recall: 0.7642 - learning_rate: 1.0000e-05
Epoch 17/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 32s 222ms/step - loss: 0.3933 - recall: 0.8056 - val_loss: 0.7841 - val_recall: 0.7660 - learning_rate: 1.0000e-05
Epoch 18/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 31s 219ms/step - loss: 0.3852 - recall: 0.7920 - val_loss: 0.7832 - val_recall: 0.7660 - learning_rate: 1.0000e-06
In [121]:
##################################
# Evaluating the model
# for a simple CNN with dropout regularization
# on the independent validation set
##################################
model_dr_simple_y_pred = model_dr_simple.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 4s 105ms/step
In [122]:
##################################
# Plotting the loss profile
# for a simple CNN with dropout regularization
# on the training and validation sets
##################################
plot_training_history(model_dr_simple_history, 'Simple CNN With Dropout Regularization : ')
In [123]:
##################################
# Consolidating the predictions
# for a simple CNN with dropout regularization
# on the validation set
##################################
model_dr_simple_predictions = np.argmax(model_dr_simple_y_pred, axis=1)
model_dr_simple_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a simple CNN with dropout regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_dr_simple_y_true, model_dr_simple_predictions), columns=classes, index=classes)
##################################
# Plotting the confusion matrix
# for a simple CNN with dropout regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Simple CNN With Dropout Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
In [124]:
##################################
# Calculating the model accuracy
# for a simple CNN with dropout regularization
# for the entire validation set
##################################
model_dr_simple_acc = accuracy_score(model_dr_simple_y_true, model_dr_simple_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with dropout regularization
# for the entire validation set
##################################
model_dr_simple_results_all = precision_recall_fscore_support(model_dr_simple_y_true, model_dr_simple_predictions, average='macro',zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with dropout regularization
# for each category of the validation set
##################################
model_dr_simple_results_class = precision_recall_fscore_support(model_dr_simple_y_true, model_dr_simple_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with dropout regularization
##################################
metric_columns = ['Precision','Recall', 'F-Score','Support']
model_dr_simple_all_df = pd.concat([pd.DataFrame(list(model_dr_simple_results_class)).T,pd.DataFrame(list(model_dr_simple_results_all)).T])
model_dr_simple_all_df.columns = metric_columns
model_dr_simple_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Simple CNN With Dropout Regularization : Validation Set Classification Performance')
model_dr_simple_all_df
Simple CNN With Dropout Regularization : Validation Set Classification Performance
Out[124]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.852459 | 0.815047 | 0.833333 | 319.0 |
| Glioma | 0.902778 | 0.738636 | 0.812500 | 264.0 |
| Meningioma | 0.554054 | 0.614232 | 0.582593 | 267.0 |
| Pituitary | 0.774691 | 0.862543 | 0.816260 | 291.0 |
| Total | 0.770996 | 0.757615 | 0.761172 | NaN |
In [125]:
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with dropout regularization
##################################
model_dr_simple_model_list = []
model_dr_simple_measure_list = []
model_dr_simple_category_list = []
model_dr_simple_value_list = []
for i in range(3):
    for j in range(5):
        model_dr_simple_model_list.append('CNN_DR_Simple')
        model_dr_simple_measure_list.append(metric_columns[i])
        model_dr_simple_category_list.append(model_dr_simple_all_df.index[j])
        model_dr_simple_value_list.append(model_dr_simple_all_df.iloc[j, i])
model_dr_simple_all_summary = pd.DataFrame(zip(model_dr_simple_model_list,
model_dr_simple_measure_list,
model_dr_simple_category_list,
model_dr_simple_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
In [126]:
##################################
# Formulating the network architecture
# for a complex CNN with dropout regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_dr_complex = Sequential(name="model_dr_complex")
model_dr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='Same', activation='relu', input_shape=input_shape, name="dr_complex_conv2d_0"))
model_dr_complex.add(MaxPooling2D(pool_size=(2, 2), name="dr_complex_max_pooling2d_0"))
model_dr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding = 'Same', activation='relu', name="dr_complex_conv2d_1"))
model_dr_complex.add(MaxPooling2D(pool_size=(2, 2), name="dr_complex_max_pooling2d_1"))
model_dr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding = 'Same', activation='relu', name="dr_complex_conv2d_2"))
model_dr_complex.add(MaxPooling2D(pool_size=(2, 2), name="dr_complex_max_pooling2d_2"))
model_dr_complex.add(Flatten(name="dr_complex_flatten"))
model_dr_complex.add(Dense(units=128, activation='relu', name="dr_complex_dense_0"))
model_dr_complex.add(Dropout(rate=0.30, name="dr_complex_dropout"))
model_dr_complex.add(Dense(units=num_classes, activation='softmax', name="dr_complex_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_dr_complex.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
In [127]:
##################################
# Fitting the model
# for a complex CNN with dropout regularization
##################################
epochs = 20
set_seed()
model_dr_complex_history = model_dr_complex.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, dr_complex_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 52s 354ms/step - loss: 1.0131 - recall: 0.3707 - val_loss: 0.8088 - val_recall: 0.6994 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 358ms/step - loss: 0.4345 - recall: 0.8110 - val_loss: 0.7967 - val_recall: 0.6968 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 82s 359ms/step - loss: 0.2910 - recall: 0.8898 - val_loss: 0.7494 - val_recall: 0.7458 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 350ms/step - loss: 0.2426 - recall: 0.9008 - val_loss: 0.7891 - val_recall: 0.7511 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 353ms/step - loss: 0.1822 - recall: 0.9304 - val_loss: 0.6271 - val_recall: 0.7844 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 350ms/step - loss: 0.1632 - recall: 0.9328 - val_loss: 0.7265 - val_recall: 0.7774 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 348ms/step - loss: 0.1317 - recall: 0.9478 - val_loss: 0.8423 - val_recall: 0.7862 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 349ms/step - loss: 0.1286 - recall: 0.9583 - val_loss: 0.8516 - val_recall: 0.8107 - learning_rate: 0.0010
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 349ms/step - loss: 0.0860 - recall: 0.9707 - val_loss: 0.7973 - val_recall: 0.8124 - learning_rate: 1.0000e-04
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 346ms/step - loss: 0.0758 - recall: 0.9745 - val_loss: 0.8234 - val_recall: 0.8081 - learning_rate: 1.0000e-04
Epoch 11/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 347ms/step - loss: 0.0523 - recall: 0.9825 - val_loss: 0.8551 - val_recall: 0.8098 - learning_rate: 1.0000e-04
Epoch 12/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 353ms/step - loss: 0.0571 - recall: 0.9813 - val_loss: 0.8562 - val_recall: 0.8054 - learning_rate: 1.0000e-05
Epoch 13/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 356ms/step - loss: 0.0540 - recall: 0.9823 - val_loss: 0.8620 - val_recall: 0.8089 - learning_rate: 1.0000e-05
Epoch 14/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 51s 352ms/step - loss: 0.0564 - recall: 0.9793 - val_loss: 0.8652 - val_recall: 0.8098 - learning_rate: 1.0000e-05
Epoch 15/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 50s 348ms/step - loss: 0.0538 - recall: 0.9812 - val_loss: 0.8655 - val_recall: 0.8098 - learning_rate: 1.0000e-06
In [128]:
##################################
# Evaluating the model
# for a complex CNN with dropout regularization
# on the independent validation set
##################################
model_dr_complex_y_pred = model_dr_complex.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 5s 133ms/step
In [129]:
##################################
# Plotting the loss profile
# for a complex CNN with dropout regularization
# on the training and validation sets
##################################
plot_training_history(model_dr_complex_history, 'Complex CNN With Dropout Regularization : ')
In [130]:
##################################
# Consolidating the predictions
# for a complex CNN with dropout regularization
# on the validation set
##################################
model_dr_complex_predictions = np.argmax(model_dr_complex_y_pred, axis=1)
model_dr_complex_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a complex CNN with dropout regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_dr_complex_y_true, model_dr_complex_predictions), columns=classes, index=classes)
##################################
# Plotting the confusion matrix
# for a complex CNN with dropout regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Complex CNN With Dropout Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
In [131]:
##################################
# Calculating the model accuracy
# for a complex CNN with dropout regularization
# for the entire validation set
##################################
model_dr_complex_acc = accuracy_score(model_dr_complex_y_true, model_dr_complex_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with dropout regularization
# for the entire validation set
##################################
model_dr_complex_results_all = precision_recall_fscore_support(model_dr_complex_y_true, model_dr_complex_predictions, average='macro',zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with dropout regularization
# for each category of the validation set
##################################
model_dr_complex_results_class = precision_recall_fscore_support(model_dr_complex_y_true, model_dr_complex_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with dropout regularization
##################################
metric_columns = ['Precision','Recall', 'F-Score','Support']
model_dr_complex_all_df = pd.concat([pd.DataFrame(list(model_dr_complex_results_class)).T,pd.DataFrame(list(model_dr_complex_results_all)).T])
model_dr_complex_all_df.columns = metric_columns
model_dr_complex_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Complex CNN With Dropout Regularization : Validation Set Classification Performance')
model_dr_complex_all_df
Complex CNN With Dropout Regularization : Validation Set Classification Performance
Out[131]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.835913 | 0.846395 | 0.841121 | 319.0 |
| Glioma | 0.858238 | 0.848485 | 0.853333 | 264.0 |
| Meningioma | 0.641975 | 0.584270 | 0.611765 | 267.0 |
| Pituitary | 0.815287 | 0.879725 | 0.846281 | 291.0 |
| Total | 0.787853 | 0.789719 | 0.788125 | NaN |
In [132]:
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with dropout regularization
##################################
model_dr_complex_model_list = []
model_dr_complex_measure_list = []
model_dr_complex_category_list = []
model_dr_complex_value_list = []
for i in range(3):
    for j in range(5):
        model_dr_complex_model_list.append('CNN_DR_Complex')
        model_dr_complex_measure_list.append(metric_columns[i])
        model_dr_complex_category_list.append(model_dr_complex_all_df.index[j])
        model_dr_complex_value_list.append(model_dr_complex_all_df.iloc[j, i])
model_dr_complex_all_summary = pd.DataFrame(zip(model_dr_complex_model_list,
model_dr_complex_measure_list,
model_dr_complex_category_list,
model_dr_complex_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
1.6.6 CNN With Batch Normalization Regularization Model Fitting | Hyperparameter Tuning | Validation ¶
In [133]:
##################################
# Formulating the network architecture
# for a simple CNN with batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_bnr_simple = Sequential(name="model_bnr_simple")
model_bnr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='Same', activation='relu', input_shape=input_shape, name="bnr_simple_conv2d_0"))
model_bnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="bnr_simple_max_pooling2d_0"))
model_bnr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding = 'Same', activation='relu', name="bnr_simple_conv2d_1"))
model_bnr_simple.add(BatchNormalization(name="bnr_simple_batch_normalization"))
model_bnr_simple.add(Activation('relu', name="bnr_simple_activation"))
model_bnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="bnr_simple_max_pooling2d_1"))
model_bnr_simple.add(Flatten(name="bnr_simple_flatten"))
model_bnr_simple.add(Dense(units=32, activation='relu', name="bnr_simple_dense_0"))
model_bnr_simple.add(Dense(units=num_classes, activation='softmax', name="bnr_simple_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_bnr_simple.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
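`BatchNormalization` standardizes each feature over the current mini-batch before applying a learnable scale and shift (at inference it substitutes running averages for the batch statistics). A minimal numpy sketch of the training-time computation, using Keras' default epsilon of 1e-3 and default scale/shift initializers (illustrative only):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-3):
    # Standardize each feature (column) over the batch axis, then apply
    # the learnable scale (gamma) and shift (beta); eps guards division.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

After this transform each feature has (approximately) zero mean and unit variance within the batch, which is what stabilizes the activation distributions between layers.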
In [134]:
##################################
# Fitting the model
# for a simple CNN with batch normalization regularization
##################################
epochs = 20
set_seed()
model_bnr_simple_history = model_bnr_simple.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, bnr_simple_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 38s 255ms/step - loss: 1.7668 - recall: 0.5558 - val_loss: 1.0888 - val_recall: 0.0473 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 258ms/step - loss: 0.3585 - recall: 0.8676 - val_loss: 0.8608 - val_recall: 0.3716 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 38s 261ms/step - loss: 0.2334 - recall: 0.9148 - val_loss: 0.7054 - val_recall: 0.6591 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 259ms/step - loss: 0.2119 - recall: 0.9237 - val_loss: 0.5743 - val_recall: 0.8089 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 260ms/step - loss: 0.2065 - recall: 0.9280 - val_loss: 0.6802 - val_recall: 0.8072 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 36s 254ms/step - loss: 0.1448 - recall: 0.9461 - val_loss: 0.8415 - val_recall: 0.8387 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 256ms/step - loss: 0.1309 - recall: 0.9561 - val_loss: 1.1974 - val_recall: 0.8107 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 253ms/step - loss: 0.0820 - recall: 0.9706 - val_loss: 0.9800 - val_recall: 0.8282 - learning_rate: 1.0000e-04
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 38s 263ms/step - loss: 0.0649 - recall: 0.9801 - val_loss: 1.0222 - val_recall: 0.8309 - learning_rate: 1.0000e-04
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 38s 261ms/step - loss: 0.0683 - recall: 0.9782 - val_loss: 1.0025 - val_recall: 0.8247 - learning_rate: 1.0000e-04
Epoch 11/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 260ms/step - loss: 0.0509 - recall: 0.9825 - val_loss: 0.9991 - val_recall: 0.8309 - learning_rate: 1.0000e-05
Epoch 12/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 258ms/step - loss: 0.0648 - recall: 0.9745 - val_loss: 0.9882 - val_recall: 0.8309 - learning_rate: 1.0000e-05
Epoch 13/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 254ms/step - loss: 0.0472 - recall: 0.9859 - val_loss: 0.9759 - val_recall: 0.8300 - learning_rate: 1.0000e-05
Epoch 14/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 257ms/step - loss: 0.0499 - recall: 0.9851 - val_loss: 0.9774 - val_recall: 0.8309 - learning_rate: 1.0000e-06
In [135]:
##################################
# Evaluating the model
# for a simple CNN with batch normalization regularization
# on the independent validation set
##################################
model_bnr_simple_y_pred = model_bnr_simple.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 3s 92ms/step
In [136]:
##################################
# Plotting the loss profile
# for a simple CNN with batch normalization regularization
# on the training and validation sets
##################################
plot_training_history(model_bnr_simple_history, 'Simple CNN With Batch Normalization Regularization : ')
In [137]:
##################################
# Consolidating the predictions
# for a simple CNN with batch normalization regularization
# on the validation set
##################################
model_bnr_simple_predictions = np.argmax(model_bnr_simple_y_pred, axis=1)
model_bnr_simple_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a simple CNN with batch normalization regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_bnr_simple_y_true, model_bnr_simple_predictions), columns=classes, index=classes)
##################################
# Plotting the confusion matrix
# for a simple CNN with batch normalization regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Simple CNN With Batch Normalization Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
In [138]:
##################################
# Calculating the model accuracy
# for a simple CNN with batch normalization regularization
# for the entire validation set
##################################
model_bnr_simple_acc = accuracy_score(model_bnr_simple_y_true, model_bnr_simple_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with batch normalization regularization
# for the entire validation set
##################################
model_bnr_simple_results_all = precision_recall_fscore_support(model_bnr_simple_y_true, model_bnr_simple_predictions, average='macro', zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with batch normalization regularization
# for each category of the validation set
##################################
model_bnr_simple_results_class = precision_recall_fscore_support(model_bnr_simple_y_true, model_bnr_simple_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with batch normalization regularization
##################################
metric_columns = ['Precision','Recall', 'F-Score','Support']
model_bnr_simple_all_df = pd.concat([pd.DataFrame(list(model_bnr_simple_results_class)).T,pd.DataFrame(list(model_bnr_simple_results_all)).T])
model_bnr_simple_all_df.columns = metric_columns
model_bnr_simple_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Simple CNN With Batch Normalization Regularization : Validation Set Classification Performance')
model_bnr_simple_all_df
Simple CNN With Batch Normalization Regularization : Validation Set Classification Performance
Out[138]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.832317 | 0.855799 | 0.843895 | 319.0 |
| Glioma | 0.962025 | 0.863636 | 0.910180 | 264.0 |
| Meningioma | 0.690647 | 0.719101 | 0.704587 | 267.0 |
| Pituitary | 0.845638 | 0.865979 | 0.855688 | 291.0 |
| Total | 0.832657 | 0.826129 | 0.828587 | NaN |
In [139]:
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with batch normalization regularization
##################################
model_bnr_simple_model_list = []
model_bnr_simple_measure_list = []
model_bnr_simple_category_list = []
model_bnr_simple_value_list = []
for i in range(3):  # Precision, Recall, F-Score (Support is excluded)
    for j in range(5):
        model_bnr_simple_model_list.append('CNN_BNR_Simple')
        model_bnr_simple_measure_list.append(metric_columns[i])
        model_bnr_simple_category_list.append(model_bnr_simple_all_df.index[j])
        model_bnr_simple_value_list.append(model_bnr_simple_all_df.iloc[j,i])
model_bnr_simple_all_summary = pd.DataFrame(zip(model_bnr_simple_model_list,
model_bnr_simple_measure_list,
model_bnr_simple_category_list,
model_bnr_simple_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
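The nested loops above build the long-format summary one cell at a time; `pandas.melt` produces the same structure in a single reshape. A minimal sketch on a hypothetical two-class slice of the metrics table (the values are taken from the validation table above, but `metrics_df` and `long_df` are illustrative names, not the notebook's):

```python
import pandas as pd

# Hypothetical two-class slice mirroring model_bnr_simple_all_df
metrics_df = pd.DataFrame(
    {"Precision": [0.8323, 0.9620],
     "Recall":    [0.8558, 0.8636],
     "F-Score":   [0.8439, 0.9102]},
    index=["No Tumor", "Glioma"])

# Wide-to-long reshape equivalent to the nested append loops
long_df = (metrics_df
           .rename_axis("Image.Category")
           .reset_index()
           .melt(id_vars="Image.Category",
                 var_name="Model.Metric",
                 value_name="Metric.Value"))
long_df.insert(0, "CNN.Model.Name", "CNN_BNR_Simple")

# Match the column order used by the loop-based version
long_df = long_df[["CNN.Model.Name", "Model.Metric", "Image.Category", "Metric.Value"]]
```

Each (category, metric) pair becomes one row, so two classes with three metrics yield six rows, the same shape the loops emit per model.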
In [140]:
##################################
# Formulating the network architecture
# for a complex CNN with batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_bnr_complex = Sequential(name="model_bnr_complex")
model_bnr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='Same', activation='relu', input_shape=input_shape, name="bnr_complex_conv2d_0"))
model_bnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="bnr_complex_max_pooling2d_0"))
model_bnr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding = 'Same', activation='relu', name="bnr_complex_conv2d_1"))
model_bnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="bnr_complex_max_pooling2d_1"))
model_bnr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding = 'Same', activation='relu', name="bnr_complex_conv2d_2"))
model_bnr_complex.add(BatchNormalization(name="bnr_complex_batch_normalization"))
model_bnr_complex.add(Activation('relu', name="bnr_complex_activation"))
model_bnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="bnr_complex_max_pooling2d_2"))
model_bnr_complex.add(Flatten(name="bnr_complex_flatten"))
model_bnr_complex.add(Dense(units=128, activation='relu', name="bnr_complex_dense_0"))
model_bnr_complex.add(Dense(units=num_classes, activation='softmax', name="bnr_complex_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_bnr_complex.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
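As a sanity check on the architecture above: 'same'-padded stride-1 convolutions preserve spatial size, and each 2x2 max-pool floors the halved dimension, so the 227x227 input shrinks to 113, 56, then 28 before `Flatten`. A small sketch of that dimension arithmetic (plain Python, no Keras required; the helper name is illustrative):

```python
# 'Same'-padded stride-1 convolutions keep the spatial size;
# each 2x2 max-pool floors the halved dimension.
def pooled_size(size, n_pools):
    for _ in range(n_pools):
        size //= 2
    return size

side = pooled_size(227, 3)          # three pooling layers: 227 -> 113 -> 56 -> 28
flatten_units = side * side * 64    # 64 filters in the final conv block
print(side, flatten_units)          # 28 50176
```

The 50,176-unit flatten feeding the 128-unit dense layer is where most of this model's parameters live (roughly 6.4 million weights in that single dense layer).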
In [141]:
##################################
# Fitting the model
# for a complex CNN with batch normalization regularization
##################################
epochs = 20
set_seed()
model_bnr_complex_history = model_bnr_complex.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, bnr_complex_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 57s 385ms/step - loss: 2.4198 - recall: 0.4782 - val_loss: 1.1481 - val_recall: 0.0096 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 55s 383ms/step - loss: 0.3966 - recall: 0.8304 - val_loss: 0.9454 - val_recall: 0.1613 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 56s 392ms/step - loss: 0.2384 - recall: 0.9055 - val_loss: 0.7357 - val_recall: 0.5819 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 55s 383ms/step - loss: 0.2179 - recall: 0.9136 - val_loss: 0.6788 - val_recall: 0.7809 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.1693 - recall: 0.9332 - val_loss: 0.8541 - val_recall: 0.7064 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 374ms/step - loss: 0.1205 - recall: 0.9529 - val_loss: 0.8922 - val_recall: 0.7774 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.1140 - recall: 0.9631 - val_loss: 1.1084 - val_recall: 0.7695 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.0670 - recall: 0.9783 - val_loss: 0.8778 - val_recall: 0.8151 - learning_rate: 1.0000e-04
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.0429 - recall: 0.9854 - val_loss: 0.8952 - val_recall: 0.8186 - learning_rate: 1.0000e-04
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 378ms/step - loss: 0.0439 - recall: 0.9852 - val_loss: 0.8729 - val_recall: 0.8335 - learning_rate: 1.0000e-04
In [142]:
##################################
# Evaluating the model
# for a complex CNN with batch normalization regularization
# on the independent validation set
##################################
model_bnr_complex_y_pred = model_bnr_complex.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 5s 135ms/step
In [143]:
##################################
# Plotting the loss profile
# for a complex CNN with batch normalization regularization
# on the training and validation sets
##################################
plot_training_history(model_bnr_complex_history, 'Complex CNN With Batch Normalization Regularization : ')
In [144]:
##################################
# Consolidating the predictions
# for a complex CNN with batch normalization regularization
# on the validation set
##################################
model_bnr_complex_predictions = np.argmax(model_bnr_complex_y_pred, axis=1)
model_bnr_complex_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a complex CNN with batch normalization regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_bnr_complex_y_true, model_bnr_complex_predictions), columns=classes, index =classes)
##################################
# Plotting the confusion matrix
# for a complex CNN with batch normalization regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Complex CNN With Batch Normalization Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
In [145]:
##################################
# Calculating the model accuracy
# for a complex CNN with batch normalization regularization
# for the entire validation set
##################################
model_bnr_complex_acc = accuracy_score(model_bnr_complex_y_true, model_bnr_complex_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with batch normalization regularization
# for the entire validation set
##################################
model_bnr_complex_results_all = precision_recall_fscore_support(model_bnr_complex_y_true, model_bnr_complex_predictions, average='macro', zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with batch normalization regularization
# for each category of the validation set
##################################
model_bnr_complex_results_class = precision_recall_fscore_support(model_bnr_complex_y_true, model_bnr_complex_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with batch normalization regularization
##################################
metric_columns = ['Precision','Recall', 'F-Score','Support']
model_bnr_complex_all_df = pd.concat([pd.DataFrame(list(model_bnr_complex_results_class)).T,pd.DataFrame(list(model_bnr_complex_results_all)).T])
model_bnr_complex_all_df.columns = metric_columns
model_bnr_complex_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Complex CNN With Batch Normalization Regularization : Validation Set Classification Performance')
model_bnr_complex_all_df
Complex CNN With Batch Normalization Regularization : Validation Set Classification Performance
Out[145]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.833333 | 0.611285 | 0.705244 | 319.0 |
| Glioma | 0.623684 | 0.897727 | 0.736025 | 264.0 |
| Meningioma | 0.430070 | 0.460674 | 0.444846 | 267.0 |
| Pituitary | 0.780083 | 0.646048 | 0.706767 | 291.0 |
| Total | 0.666793 | 0.653934 | 0.648221 | NaN |
In [146]:
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with batch normalization regularization
##################################
model_bnr_complex_model_list = []
model_bnr_complex_measure_list = []
model_bnr_complex_category_list = []
model_bnr_complex_value_list = []
for i in range(3):  # Precision, Recall, F-Score (Support is excluded)
    for j in range(5):
        model_bnr_complex_model_list.append('CNN_BNR_Complex')
        model_bnr_complex_measure_list.append(metric_columns[i])
        model_bnr_complex_category_list.append(model_bnr_complex_all_df.index[j])
        model_bnr_complex_value_list.append(model_bnr_complex_all_df.iloc[j,i])
model_bnr_complex_all_summary = pd.DataFrame(zip(model_bnr_complex_model_list,
model_bnr_complex_measure_list,
model_bnr_complex_category_list,
model_bnr_complex_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
1.6.7 CNN With Dropout and Batch Normalization Regularization Model Fitting | Hyperparameter Tuning | Validation ¶
In [147]:
##################################
# Formulating the network architecture
# for a simple CNN with dropout and batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_cdrbnr_simple = Sequential(name="model_cdrbnr_simple")
model_cdrbnr_simple.add(Conv2D(filters=8, kernel_size=(3, 3), padding='Same', activation='relu', input_shape=input_shape, name="cdrbnr_simple_conv2d_0"))
model_cdrbnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_simple_max_pooling2d_0"))
model_cdrbnr_simple.add(Conv2D(filters=16, kernel_size=(3, 3), padding = 'Same', activation='relu', name="cdrbnr_simple_conv2d_1"))
model_cdrbnr_simple.add(BatchNormalization(name="cdrbnr_simple_batch_normalization"))
model_cdrbnr_simple.add(Activation('relu', name="cdrbnr_simple_activation"))
model_cdrbnr_simple.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_simple_max_pooling2d_1"))
model_cdrbnr_simple.add(Flatten(name="cdrbnr_simple_flatten"))
model_cdrbnr_simple.add(Dense(units=32, activation='relu', name="cdrbnr_simple_dense_0"))
model_cdrbnr_simple.add(Dropout(rate=0.30, name="cdrbnr_simple_dropout"))
model_cdrbnr_simple.add(Dense(units=num_classes, activation='softmax', name="cdrbnr_simple_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_cdrbnr_simple.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
In [148]:
##################################
# Fitting the model
# for a simple CNN with dropout and batch normalization regularization
##################################
epochs = 20
set_seed()
model_cdrbnr_simple_history = model_cdrbnr_simple.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, cdrbnr_simple_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 40s 265ms/step - loss: 1.6579 - recall: 0.1515 - val_loss: 1.3345 - val_recall: 0.0018 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 39s 268ms/step - loss: 1.0206 - recall: 0.3417 - val_loss: 1.1807 - val_recall: 0.0649 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 256ms/step - loss: 0.9324 - recall: 0.3955 - val_loss: 1.0523 - val_recall: 0.2366 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 38s 267ms/step - loss: 0.7758 - recall: 0.4966 - val_loss: 0.9607 - val_recall: 0.4137 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 259ms/step - loss: 0.7319 - recall: 0.5117 - val_loss: 1.0513 - val_recall: 0.4496 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 38s 264ms/step - loss: 0.6944 - recall: 0.5397 - val_loss: 1.0002 - val_recall: 0.5127 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 258ms/step - loss: 0.6810 - recall: 0.5275 - val_loss: 1.1606 - val_recall: 0.6056 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 37s 254ms/step - loss: 0.6298 - recall: 0.5520 - val_loss: 0.9720 - val_recall: 0.5951 - learning_rate: 1.0000e-04
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 36s 251ms/step - loss: 0.5942 - recall: 0.5613 - val_loss: 0.9829 - val_recall: 0.5960 - learning_rate: 1.0000e-04
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 36s 252ms/step - loss: 0.6268 - recall: 0.5480 - val_loss: 1.0679 - val_recall: 0.5942 - learning_rate: 1.0000e-04
In [149]:
##################################
# Evaluating the model
# for a simple CNN with dropout and batch normalization regularization
# on the independent validation set
##################################
model_cdrbnr_simple_y_pred = model_cdrbnr_simple.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 4s 99ms/step
In [150]:
##################################
# Plotting the loss profile
# for a simple CNN with dropout and batch normalization regularization
# on the training and validation sets
##################################
plot_training_history(model_cdrbnr_simple_history, 'Simple CNN With Dropout and Batch Normalization Regularization : ')
In [151]:
##################################
# Consolidating the predictions
# for a simple CNN with dropout and batch normalization regularization
# on the validation set
##################################
model_cdrbnr_simple_predictions = np.argmax(model_cdrbnr_simple_y_pred, axis=1)
model_cdrbnr_simple_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a simple CNN with dropout and batch normalization regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_cdrbnr_simple_y_true, model_cdrbnr_simple_predictions), columns=classes, index =classes)
##################################
# Plotting the confusion matrix
# for a simple CNN with dropout and batch normalization regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Simple CNN With Dropout and Batch Normalization Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
In [152]:
##################################
# Calculating the model accuracy
# for a simple CNN with dropout and batch normalization regularization
# for the entire validation set
##################################
model_cdrbnr_simple_acc = accuracy_score(model_cdrbnr_simple_y_true, model_cdrbnr_simple_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with dropout and batch normalization regularization
# for the entire validation set
##################################
model_cdrbnr_simple_results_all = precision_recall_fscore_support(model_cdrbnr_simple_y_true, model_cdrbnr_simple_predictions, average='macro', zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a simple CNN with dropout and batch normalization regularization
# for each category of the validation set
##################################
model_cdrbnr_simple_results_class = precision_recall_fscore_support(model_cdrbnr_simple_y_true, model_cdrbnr_simple_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with dropout and batch normalization regularization
##################################
metric_columns = ['Precision','Recall', 'F-Score','Support']
model_cdrbnr_simple_all_df = pd.concat([pd.DataFrame(list(model_cdrbnr_simple_results_class)).T,pd.DataFrame(list(model_cdrbnr_simple_results_all)).T])
model_cdrbnr_simple_all_df.columns = metric_columns
model_cdrbnr_simple_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Simple CNN With Dropout and Batch Normalization Regularization : Validation Set Classification Performance')
model_cdrbnr_simple_all_df
Simple CNN With Dropout and Batch Normalization Regularization : Validation Set Classification Performance
Out[152]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.722656 | 0.579937 | 0.643478 | 319.0 |
| Glioma | 0.875000 | 0.026515 | 0.051471 | 264.0 |
| Meningioma | 0.313199 | 0.524345 | 0.392157 | 267.0 |
| Pituitary | 0.490698 | 0.725086 | 0.585298 | 291.0 |
| Total | 0.600388 | 0.463971 | 0.418101 | NaN |
In [153]:
##################################
# Formulating the network architecture
# for a complex CNN with dropout and batch normalization regularization
##################################
set_seed()
batch_size = 32
input_shape = (227, 227, 1)
model_cdrbnr_complex = Sequential(name="model_cdrbnr_complex")
model_cdrbnr_complex.add(Conv2D(filters=16, kernel_size=(3, 3), padding='Same', activation='relu', input_shape=input_shape, name="cdrbnr_complex_conv2d_0"))
model_cdrbnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_complex_max_pooling2d_0"))
model_cdrbnr_complex.add(Conv2D(filters=32, kernel_size=(3, 3), padding = 'Same', activation='relu', name="cdrbnr_complex_conv2d_1"))
model_cdrbnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_complex_max_pooling2d_1"))
model_cdrbnr_complex.add(Conv2D(filters=64, kernel_size=(3, 3), padding = 'Same', activation='relu', name="cdrbnr_complex_conv2d_2"))
model_cdrbnr_complex.add(BatchNormalization(name="cdrbnr_complex_batch_normalization"))
model_cdrbnr_complex.add(Activation('relu', name="cdrbnr_complex_activation"))
model_cdrbnr_complex.add(MaxPooling2D(pool_size=(2, 2), name="cdrbnr_complex_max_pooling2d_2"))
model_cdrbnr_complex.add(Flatten(name="cdrbnr_complex_flatten"))
model_cdrbnr_complex.add(Dense(units=128, activation='relu', name="cdrbnr_complex_dense_0"))
model_cdrbnr_complex.add(Dropout(rate=0.30, name="cdrbnr_complex_dropout"))
model_cdrbnr_complex.add(Dense(units=num_classes, activation='softmax', name="cdrbnr_complex_dense_1"))
##################################
# Compiling the network layers
##################################
optimizer = Adam(learning_rate=0.001)
model_cdrbnr_complex.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[Recall(name='recall')])
In [154]:
##################################
# Fitting the model
# for a complex CNN with dropout and batch normalization regularization
##################################
epochs = 20
set_seed()
model_cdrbnr_complex_history = model_cdrbnr_complex.fit(train_gen,
steps_per_epoch=len(train_gen)+1,
validation_steps=len(val_gen)+1,
validation_data=val_gen,
epochs=epochs,
verbose=1,
callbacks=[early_stopping, reduce_lr, cdrbnr_complex_model_checkpoint])
Epoch 1/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 58s 393ms/step - loss: 1.7995 - recall: 0.5219 - val_loss: 1.1321 - val_recall: 0.0342 - learning_rate: 0.0010
Epoch 2/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 56s 386ms/step - loss: 0.3938 - recall: 0.8333 - val_loss: 0.9887 - val_recall: 0.0649 - learning_rate: 0.0010
Epoch 3/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 378ms/step - loss: 0.2484 - recall: 0.8988 - val_loss: 0.6290 - val_recall: 0.6713 - learning_rate: 0.0010
Epoch 4/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 55s 381ms/step - loss: 0.2268 - recall: 0.9093 - val_loss: 0.6252 - val_recall: 0.7555 - learning_rate: 0.0010
Epoch 5/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 56s 390ms/step - loss: 0.1590 - recall: 0.9359 - val_loss: 0.8430 - val_recall: 0.7046 - learning_rate: 0.0010
Epoch 6/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 112s 778ms/step - loss: 0.1436 - recall: 0.9409 - val_loss: 0.5680 - val_recall: 0.8352 - learning_rate: 0.0010
Epoch 7/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.1110 - recall: 0.9563 - val_loss: 0.7335 - val_recall: 0.8344 - learning_rate: 0.0010
Epoch 8/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 374ms/step - loss: 0.1024 - recall: 0.9607 - val_loss: 0.9613 - val_recall: 0.8291 - learning_rate: 0.0010
Epoch 9/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.1047 - recall: 0.9615 - val_loss: 0.6784 - val_recall: 0.8475 - learning_rate: 0.0010
Epoch 10/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.0612 - recall: 0.9779 - val_loss: 0.7055 - val_recall: 0.8580 - learning_rate: 1.0000e-04
Epoch 11/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 374ms/step - loss: 0.0465 - recall: 0.9825 - val_loss: 0.7504 - val_recall: 0.8615 - learning_rate: 1.0000e-04
Epoch 12/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 377ms/step - loss: 0.0426 - recall: 0.9825 - val_loss: 0.8035 - val_recall: 0.8624 - learning_rate: 1.0000e-04
Epoch 13/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 373ms/step - loss: 0.0373 - recall: 0.9885 - val_loss: 0.7971 - val_recall: 0.8624 - learning_rate: 1.0000e-05
Epoch 14/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 55s 380ms/step - loss: 0.0427 - recall: 0.9842 - val_loss: 0.7896 - val_recall: 0.8606 - learning_rate: 1.0000e-05
Epoch 15/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 55s 379ms/step - loss: 0.0323 - recall: 0.9903 - val_loss: 0.7911 - val_recall: 0.8606 - learning_rate: 1.0000e-05
Epoch 16/20
144/144 ━━━━━━━━━━━━━━━━━━━━ 54s 375ms/step - loss: 0.0418 - recall: 0.9818 - val_loss: 0.7901 - val_recall: 0.8606 - learning_rate: 1.0000e-06
In [155]:
##################################
# Consolidating all model evaluation metrics
# for a simple CNN with dropout and batch normalization regularization
##################################
model_cdrbnr_simple_model_list = []
model_cdrbnr_simple_measure_list = []
model_cdrbnr_simple_category_list = []
model_cdrbnr_simple_value_list = []
for i in range(3):  # Precision, Recall, F-Score (Support is excluded)
    for j in range(5):
        model_cdrbnr_simple_model_list.append('CNN_CDRBNR_Simple')
        model_cdrbnr_simple_measure_list.append(metric_columns[i])
        model_cdrbnr_simple_category_list.append(model_cdrbnr_simple_all_df.index[j])
        model_cdrbnr_simple_value_list.append(model_cdrbnr_simple_all_df.iloc[j,i])
model_cdrbnr_simple_all_summary = pd.DataFrame(zip(model_cdrbnr_simple_model_list,
model_cdrbnr_simple_measure_list,
model_cdrbnr_simple_category_list,
model_cdrbnr_simple_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
In [156]:
##################################
# Evaluating the model
# for a complex CNN with dropout and batch normalization regularization
# on the independent validation set
##################################
model_cdrbnr_complex_y_pred = model_cdrbnr_complex.predict(val_gen)
36/36 ━━━━━━━━━━━━━━━━━━━━ 5s 133ms/step
In [157]:
##################################
# Plotting the loss profile
# for a complex CNN with dropout and batch normalization regularization
# on the training and validation sets
##################################
plot_training_history(model_cdrbnr_complex_history, 'Complex CNN With Dropout and Batch Normalization Regularization : ')
In [158]:
##################################
# Consolidating the predictions
# for a complex CNN with dropout and batch normalization regularization
# on the validation set
##################################
model_cdrbnr_complex_predictions = np.argmax(model_cdrbnr_complex_y_pred, axis=1)
model_cdrbnr_complex_y_true = val_gen.classes
##################################
# Formulating the confusion matrix
# for a complex CNN with dropout and batch normalization regularization
# on the validation set
##################################
CMatrix = pd.DataFrame(confusion_matrix(model_cdrbnr_complex_y_true, model_cdrbnr_complex_predictions), columns=classes, index =classes)
##################################
# Plotting the confusion matrix
# for a complex CNN with dropout and batch normalization regularization
# on the validation set
##################################
plt.figure(figsize=(10, 6))
ax = sns.heatmap(CMatrix, annot=True, fmt='g', vmin=0, vmax=250, cmap='icefire')
ax.set_xlabel('Predicted', fontsize=14, weight='bold')
ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
ax.set_ylabel('Actual', fontsize=14, weight='bold')
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_title('Complex CNN With Dropout and Batch Normalization Regularization : Validation Set Confusion Matrix', fontsize=14, weight='bold', pad=20);
##################################
# Resetting all states generated by Keras
##################################
keras.backend.clear_session()
In [159]:
##################################
# Calculating the model accuracy
# for a complex CNN with dropout and batch normalization regularization
# for the entire validation set
##################################
model_cdrbnr_complex_acc = accuracy_score(model_cdrbnr_complex_y_true, model_cdrbnr_complex_predictions)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with dropout and batch normalization regularization
# for the entire validation set
##################################
model_cdrbnr_complex_results_all = precision_recall_fscore_support(model_cdrbnr_complex_y_true, model_cdrbnr_complex_predictions, average='macro', zero_division = 1)
##################################
# Calculating the model
# Precision, Recall, F-score and Support
# for a complex CNN with dropout and batch normalization regularization
# for each category of the validation set
##################################
model_cdrbnr_complex_results_class = precision_recall_fscore_support(model_cdrbnr_complex_y_true, model_cdrbnr_complex_predictions, average=None, zero_division = 1)
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with dropout and batch normalization regularization
##################################
metric_columns = ['Precision','Recall', 'F-Score','Support']
model_cdrbnr_complex_all_df = pd.concat([pd.DataFrame(list(model_cdrbnr_complex_results_class)).T,pd.DataFrame(list(model_cdrbnr_complex_results_all)).T])
model_cdrbnr_complex_all_df.columns = metric_columns
model_cdrbnr_complex_all_df.index = ['No Tumor', 'Glioma', 'Meningioma', 'Pituitary', 'Total']
print('Complex CNN With Dropout and Batch Normalization Regularization : Validation Set Classification Performance')
model_cdrbnr_complex_all_df
Complex CNN With Dropout and Batch Normalization Regularization : Validation Set Classification Performance
Out[159]:
| | Precision | Recall | F-Score | Support |
|---|---|---|---|---|
| No Tumor | 0.848765 | 0.862069 | 0.855365 | 319.0 |
| Glioma | 0.923077 | 0.818182 | 0.867470 | 264.0 |
| Meningioma | 0.753968 | 0.711610 | 0.732177 | 267.0 |
| Pituitary | 0.845921 | 0.962199 | 0.900322 | 291.0 |
| Total | 0.842933 | 0.838515 | 0.838834 | NaN |
In [160]:
##################################
# Consolidating all model evaluation metrics
# for a complex CNN with dropout and batch normalization regularization
##################################
model_cdrbnr_complex_model_list = []
model_cdrbnr_complex_measure_list = []
model_cdrbnr_complex_category_list = []
model_cdrbnr_complex_value_list = []
for i in range(3):  # Precision, Recall, F-Score (Support is excluded)
    for j in range(5):
        model_cdrbnr_complex_model_list.append('CNN_CDRBNR_Complex')
        model_cdrbnr_complex_measure_list.append(metric_columns[i])
        model_cdrbnr_complex_category_list.append(model_cdrbnr_complex_all_df.index[j])
        model_cdrbnr_complex_value_list.append(model_cdrbnr_complex_all_df.iloc[j,i])
model_cdrbnr_complex_all_summary = pd.DataFrame(zip(model_cdrbnr_complex_model_list,
model_cdrbnr_complex_measure_list,
model_cdrbnr_complex_category_list,
model_cdrbnr_complex_value_list),
columns=['CNN.Model.Name',
'Model.Metric',
'Image.Category',
'Metric.Value'])
1.6.8 Model Selection ¶
In [161]:
##################################
# Consolidating all the
# CNN model performance measures
##################################
cnn_model_performance_comparison = pd.concat([model_nr_simple_all_summary,
model_nr_complex_all_summary,
model_dr_simple_all_summary,
model_dr_complex_all_summary,
model_bnr_simple_all_summary,
model_bnr_complex_all_summary,
model_cdrbnr_simple_all_summary,
model_cdrbnr_complex_all_summary],
ignore_index=True)
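With the long-format summaries concatenated, a pivot gives a compact side-by-side view for ranking the candidate models. A minimal sketch on a hypothetical slice of the 'Total' rows (the values are taken from the batch-normalization and combined-regularization validation tables above; `summary` and `totals` are illustrative stand-ins, not the notebook's variables):

```python
import pandas as pd

# Hypothetical long-format rows mirroring cnn_model_performance_comparison
summary = pd.DataFrame({
    "CNN.Model.Name": ["CNN_BNR_Simple", "CNN_BNR_Simple",
                       "CNN_CDRBNR_Complex", "CNN_CDRBNR_Complex"],
    "Model.Metric":   ["Precision", "Recall", "Precision", "Recall"],
    "Image.Category": ["Total", "Total", "Total", "Total"],
    "Metric.Value":   [0.8327, 0.8261, 0.8429, 0.8385],
})

# One row per model, one column per metric: easier to compare at a glance
totals = (summary[summary["Image.Category"] == "Total"]
          .pivot(index="CNN.Model.Name",
                 columns="Model.Metric",
                 values="Metric.Value"))
```

Sorting `totals` by the recall column then surfaces the best-performing model directly, rather than scanning eight per-model tables.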
In [162]:
##################################
# Consolidating all the precision
# model performance measures
##################################
cnn_model_performance_comparison_precision = cnn_model_performance_comparison[cnn_model_performance_comparison['Model.Metric']=='Precision']
cnn_model_performance_comparison_precision_CNN_NR_Simple = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_NR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_precision_CNN_NR_Complex = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_NR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_precision_CNN_DR_Simple = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_DR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_precision_CNN_DR_Complex = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_DR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_precision_CNN_BNR_Simple = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_BNR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_precision_CNN_BNR_Complex = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_BNR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_precision_CNN_CDRBNR_Simple = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_CDRBNR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_precision_CNN_CDRBNR_Complex = cnn_model_performance_comparison_precision[cnn_model_performance_comparison_precision['CNN.Model.Name']=='CNN_CDRBNR_Complex'].loc[:,"Metric.Value"]
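The eight per-model boolean filters above (and the manual recombination that follows) can also be expressed as a single `pivot`. This is a refactoring sketch on a hypothetical long-format frame with the same columns as `cnn_model_performance_comparison`; the values are illustrative only.

```python
import pandas as pd

# Hypothetical long-format comparison frame (illustrative values)
comparison = pd.DataFrame({
    "CNN.Model.Name": ["CNN_NR_Simple"] * 2 + ["CNN_BNR_Simple"] * 2,
    "Model.Metric": ["Precision"] * 4,
    "Image.Category": ["No Tumor", "Glioma", "No Tumor", "Glioma"],
    "Metric.Value": [0.89, 0.93, 0.83, 0.96],
})

# One pivot replaces the eight per-model filters:
# rows = image categories, columns = model names
precision_wide = (
    comparison[comparison["Model.Metric"] == "Precision"]
    .pivot(index="Image.Category", columns="CNN.Model.Name", values="Metric.Value")
)
```

The same pivot, filtered on `'Recall'` or `'F-Score'`, would produce the recall and F-score comparison tables as well.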
In [163]:
##################################
# Combining all the precision
# model performance measures
# for all CNN models
##################################
cnn_model_performance_comparison_precision_plot = pd.DataFrame({'CNN_NR_Simple': cnn_model_performance_comparison_precision_CNN_NR_Simple.values,
'CNN_NR_Complex': cnn_model_performance_comparison_precision_CNN_NR_Complex.values,
'CNN_DR_Simple': cnn_model_performance_comparison_precision_CNN_DR_Simple.values,
'CNN_DR_Complex': cnn_model_performance_comparison_precision_CNN_DR_Complex.values,
'CNN_BNR_Simple': cnn_model_performance_comparison_precision_CNN_BNR_Simple.values,
'CNN_BNR_Complex': cnn_model_performance_comparison_precision_CNN_BNR_Complex.values,
'CNN_CDRBNR_Simple': cnn_model_performance_comparison_precision_CNN_CDRBNR_Simple.values,
'CNN_CDRBNR_Complex': cnn_model_performance_comparison_precision_CNN_CDRBNR_Complex.values},
index=cnn_model_performance_comparison_precision['Image.Category'].unique())
cnn_model_performance_comparison_precision_plot
Out[163]:
| | CNN_NR_Simple | CNN_NR_Complex | CNN_DR_Simple | CNN_DR_Complex | CNN_BNR_Simple | CNN_BNR_Complex | CNN_CDRBNR_Simple | CNN_CDRBNR_Complex |
|---|---|---|---|---|---|---|---|---|
| No Tumor | 0.893238 | 0.863057 | 0.852459 | 0.835913 | 0.832317 | 0.833333 | 0.722656 | 0.848765 |
| Glioma | 0.928571 | 0.871486 | 0.902778 | 0.858238 | 0.962025 | 0.623684 | 0.875000 | 0.923077 |
| Meningioma | 0.624573 | 0.655602 | 0.554054 | 0.641975 | 0.690647 | 0.430070 | 0.313199 | 0.753968 |
| Pituitary | 0.772595 | 0.795252 | 0.774691 | 0.815287 | 0.845638 | 0.780083 | 0.490698 | 0.845921 |
| Total | 0.804744 | 0.796349 | 0.770996 | 0.787853 | 0.832657 | 0.666793 | 0.600388 | 0.842933 |
In [164]:
##################################
# Plotting all the precision
# model performance measures
# for all CNN models
##################################
cnn_model_performance_comparison_precision_plot = cnn_model_performance_comparison_precision_plot.plot.barh(figsize=(10, 12), width=0.90)
cnn_model_performance_comparison_precision_plot.set_xlim(0.00,1.00)
cnn_model_performance_comparison_precision_plot.set_title("Model Comparison by Precision Performance on Validation Data")
cnn_model_performance_comparison_precision_plot.set_xlabel("Precision Performance")
cnn_model_performance_comparison_precision_plot.set_ylabel("Image Categories")
cnn_model_performance_comparison_precision_plot.grid(False)
cnn_model_performance_comparison_precision_plot.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
for container in cnn_model_performance_comparison_precision_plot.containers:
    cnn_model_performance_comparison_precision_plot.bar_label(container, fmt='%.5f', padding=-50, color='white', fontweight='bold')
In [165]:
##################################
# Consolidating all the recall
# model performance measures
##################################
cnn_model_performance_comparison_recall = cnn_model_performance_comparison[cnn_model_performance_comparison['Model.Metric']=='Recall']
cnn_model_performance_comparison_recall_CNN_NR_Simple = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_NR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_recall_CNN_NR_Complex = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_NR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_recall_CNN_DR_Simple = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_DR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_recall_CNN_DR_Complex = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_DR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_recall_CNN_BNR_Simple = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_BNR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_recall_CNN_BNR_Complex = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_BNR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_recall_CNN_CDRBNR_Simple = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_CDRBNR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_recall_CNN_CDRBNR_Complex = cnn_model_performance_comparison_recall[cnn_model_performance_comparison_recall['CNN.Model.Name']=='CNN_CDRBNR_Complex'].loc[:,"Metric.Value"]
In [166]:
##################################
# Combining all the recall
# model performance measures
# for all CNN models
##################################
cnn_model_performance_comparison_recall_plot = pd.DataFrame({'CNN_NR_Simple': cnn_model_performance_comparison_recall_CNN_NR_Simple.values,
'CNN_NR_Complex': cnn_model_performance_comparison_recall_CNN_NR_Complex.values,
'CNN_DR_Simple': cnn_model_performance_comparison_recall_CNN_DR_Simple.values,
'CNN_DR_Complex': cnn_model_performance_comparison_recall_CNN_DR_Complex.values,
'CNN_BNR_Simple': cnn_model_performance_comparison_recall_CNN_BNR_Simple.values,
'CNN_BNR_Complex': cnn_model_performance_comparison_recall_CNN_BNR_Complex.values,
'CNN_CDRBNR_Simple': cnn_model_performance_comparison_recall_CNN_CDRBNR_Simple.values,
'CNN_CDRBNR_Complex': cnn_model_performance_comparison_recall_CNN_CDRBNR_Complex.values},
index=cnn_model_performance_comparison_recall['Image.Category'].unique())
cnn_model_performance_comparison_recall_plot
Out[166]:
| | CNN_NR_Simple | CNN_NR_Complex | CNN_DR_Simple | CNN_DR_Complex | CNN_BNR_Simple | CNN_BNR_Complex | CNN_CDRBNR_Simple | CNN_CDRBNR_Complex |
|---|---|---|---|---|---|---|---|---|
| No Tumor | 0.786834 | 0.849530 | 0.815047 | 0.846395 | 0.855799 | 0.611285 | 0.579937 | 0.862069 |
| Glioma | 0.787879 | 0.821970 | 0.738636 | 0.848485 | 0.863636 | 0.897727 | 0.026515 | 0.818182 |
| Meningioma | 0.685393 | 0.591760 | 0.614232 | 0.584270 | 0.719101 | 0.460674 | 0.524345 | 0.711610 |
| Pituitary | 0.910653 | 0.920962 | 0.862543 | 0.879725 | 0.865979 | 0.646048 | 0.725086 | 0.962199 |
| Total | 0.792690 | 0.796055 | 0.757615 | 0.789719 | 0.826129 | 0.653934 | 0.463971 | 0.838515 |
In [167]:
##################################
# Plotting all the recall
# model performance measures
# for all CNN models
##################################
cnn_model_performance_comparison_recall_plot = cnn_model_performance_comparison_recall_plot.plot.barh(figsize=(10, 12), width=0.90)
cnn_model_performance_comparison_recall_plot.set_xlim(0.00,1.00)
cnn_model_performance_comparison_recall_plot.set_title("Model Comparison by Recall Performance on Validation Data")
cnn_model_performance_comparison_recall_plot.set_xlabel("Recall Performance")
cnn_model_performance_comparison_recall_plot.set_ylabel("Image Categories")
cnn_model_performance_comparison_recall_plot.grid(False)
cnn_model_performance_comparison_recall_plot.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
for container in cnn_model_performance_comparison_recall_plot.containers:
    cnn_model_performance_comparison_recall_plot.bar_label(container, fmt='%.5f', padding=-50, color='white', fontweight='bold')
In [168]:
##################################
# Consolidating all the fscore
# model performance measures
##################################
cnn_model_performance_comparison_fscore = cnn_model_performance_comparison[cnn_model_performance_comparison['Model.Metric']=='F-Score']
cnn_model_performance_comparison_fscore_CNN_NR_Simple = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_NR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_fscore_CNN_NR_Complex = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_NR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_fscore_CNN_DR_Simple = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_DR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_fscore_CNN_DR_Complex = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_DR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_fscore_CNN_BNR_Simple = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_BNR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_fscore_CNN_BNR_Complex = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_BNR_Complex'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_fscore_CNN_CDRBNR_Simple = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_CDRBNR_Simple'].loc[:,"Metric.Value"]
cnn_model_performance_comparison_fscore_CNN_CDRBNR_Complex = cnn_model_performance_comparison_fscore[cnn_model_performance_comparison_fscore['CNN.Model.Name']=='CNN_CDRBNR_Complex'].loc[:,"Metric.Value"]
In [169]:
##################################
# Combining all the fscore
# model performance measures
# for all CNN models
##################################
cnn_model_performance_comparison_fscore_plot = pd.DataFrame({'CNN_NR_Simple': cnn_model_performance_comparison_fscore_CNN_NR_Simple.values,
'CNN_NR_Complex': cnn_model_performance_comparison_fscore_CNN_NR_Complex.values,
'CNN_DR_Simple': cnn_model_performance_comparison_fscore_CNN_DR_Simple.values,
'CNN_DR_Complex': cnn_model_performance_comparison_fscore_CNN_DR_Complex.values,
'CNN_BNR_Simple': cnn_model_performance_comparison_fscore_CNN_BNR_Simple.values,
'CNN_BNR_Complex': cnn_model_performance_comparison_fscore_CNN_BNR_Complex.values,
'CNN_CDRBNR_Simple': cnn_model_performance_comparison_fscore_CNN_CDRBNR_Simple.values,
'CNN_CDRBNR_Complex': cnn_model_performance_comparison_fscore_CNN_CDRBNR_Complex.values},
index=cnn_model_performance_comparison_fscore['Image.Category'].unique())
cnn_model_performance_comparison_fscore_plot
Out[169]:
| | CNN_NR_Simple | CNN_NR_Complex | CNN_DR_Simple | CNN_DR_Complex | CNN_BNR_Simple | CNN_BNR_Complex | CNN_CDRBNR_Simple | CNN_CDRBNR_Complex |
|---|---|---|---|---|---|---|---|---|
| No Tumor | 0.836667 | 0.856240 | 0.833333 | 0.841121 | 0.843895 | 0.705244 | 0.643478 | 0.855365 |
| Glioma | 0.852459 | 0.846004 | 0.812500 | 0.853333 | 0.910180 | 0.736025 | 0.051471 | 0.867470 |
| Meningioma | 0.653571 | 0.622047 | 0.582593 | 0.611765 | 0.704587 | 0.444846 | 0.392157 | 0.732177 |
| Pituitary | 0.835962 | 0.853503 | 0.816260 | 0.846281 | 0.855688 | 0.706767 | 0.585298 | 0.900322 |
| Total | 0.794665 | 0.794449 | 0.761172 | 0.788125 | 0.828587 | 0.648221 | 0.418101 | 0.838834 |
In [170]:
##################################
# Plotting all the fscore
# model performance measures
# for all CNN models
##################################
cnn_model_performance_comparison_fscore_plot = cnn_model_performance_comparison_fscore_plot.plot.barh(figsize=(10, 12), width=0.90)
cnn_model_performance_comparison_fscore_plot.set_xlim(0.00,1.00)
cnn_model_performance_comparison_fscore_plot.set_title("Model Comparison by F-Score Performance on Validation Data")
cnn_model_performance_comparison_fscore_plot.set_xlabel("F-Score Performance")
cnn_model_performance_comparison_fscore_plot.set_ylabel("Image Categories")
cnn_model_performance_comparison_fscore_plot.grid(False)
cnn_model_performance_comparison_fscore_plot.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
for container in cnn_model_performance_comparison_fscore_plot.containers:
    cnn_model_performance_comparison_fscore_plot.bar_label(container, fmt='%.5f', padding=-50, color='white', fontweight='bold')
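The precision, recall, and F-score cells repeat the same plotting code verbatim. A small helper (a refactoring sketch, not part of the original notebook) would remove the duplication; only the metric name varies between the three charts:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

def plot_metric_comparison(metric_df, metric_name):
    # Horizontal grouped bar chart of one metric across all CNN models
    ax = metric_df.plot.barh(figsize=(10, 12), width=0.90)
    ax.set_xlim(0.00, 1.00)
    ax.set_title(f"Model Comparison by {metric_name} Performance on Validation Data")
    ax.set_xlabel(f"{metric_name} Performance")
    ax.set_ylabel("Image Categories")
    ax.grid(False)
    ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
    for container in ax.containers:
        ax.bar_label(container, fmt='%.5f', padding=-50,
                     color='white', fontweight='bold')
    return ax

# Example on a tiny hypothetical table (one model, two categories)
demo = pd.DataFrame({"CNN_NR_Simple": [0.89, 0.93]},
                    index=["No Tumor", "Glioma"])
ax = plot_metric_comparison(demo, "Precision")
```

Each of the three plotting cells then reduces to a single call, e.g. `plot_metric_comparison(cnn_model_performance_comparison_precision_plot, "Precision")`.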
1.6.9 Model Testing ¶
1.6.10 Model Inference ¶
1.7 Predictive Model Deployment Using Streamlit and Streamlit Community Cloud ¶
1.7.1 Model Application Programming Interface Code Development ¶
1.7.2 User Interface Application Code Development ¶
1.7.3 Web Application ¶
2. Summary ¶
3. References ¶
- [Book] Deep Learning with Python by Francois Chollet
- [Book] Deep Learning: A Visual Approach by Andrew Glassner
- [Book] Learning Deep Learning by Magnus Ekman
- [Book] Practical Deep Learning by Ronald Kneusel
- [Book] Deep Learning with Tensorflow and Keras by Amita Kapoor, Antonio Gulli and Sujit Pal
- [Book] Deep Learning by John Kelleher
- [Book] Generative Deep Learning by David Foster
- [Book] Deep Learning Illustrated by John Krohn, Grant Beyleveld and Aglae Bassens
- [Book] Neural Networks and Deep Learning by Charu Aggarwal
- [Book] Grokking Deep Learning by Andrew Trask
- [Book] Deep Learning with Pytorch by Eli Stevens, Luca Antiga and Thomas Viehmann
- [Book] Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville
- [Book] Deep Learning from Scratch by Seth Weidman
- [Book] Fundamentals of Deep Learning by Nithin Buduma, Nikhil Buduma and Joe Papa
- [Book] Hands-On Machine Learning with Scikit-Learn, Keras and Tensorflow by Aurelien Geron
- [Book] Deep Learning for Computer Vision by Jason Brownlee
- [Python Library API] numpy by NumPy Team
- [Python Library API] pandas by Pandas Team
- [Python Library API] seaborn by Seaborn Team
- [Python Library API] matplotlib.pyplot by MatPlotLib Team
- [Python Library API] matplotlib.image by MatPlotLib Team
- [Python Library API] matplotlib.offsetbox by MatPlotLib Team
- [Python Library API] tensorflow by TensorFlow Team
- [Python Library API] keras by Keras Team
- [Python Library API] pil by Pillow Team
- [Python Library API] glob by glob Team
- [Python Library API] cv2 by OpenCV Team
- [Python Library API] os by os Team
- [Python Library API] random by random Team
- [Python Library API] keras.models by TensorFlow Team
- [Python Library API] keras.layers by TensorFlow Team
- [Python Library API] keras.wrappers by TensorFlow Team
- [Python Library API] keras.utils by TensorFlow Team
- [Python Library API] keras.optimizers by TensorFlow Team
- [Python Library API] keras.preprocessing.image by TensorFlow Team
- [Python Library API] keras.callbacks by TensorFlow Team
- [Python Library API] keras.metrics by TensorFlow Team
- [Python Library API] sklearn.metrics by Scikit-Learn Team
- [Article] Convolutional Neural Networks, Explained by Mayank Mishra (Towards Data Science)
- [Article] A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way by Sumit Saha (Towards Data Science)
- [Article] Understanding Convolutional Neural Networks: A Beginner’s Journey into the Architecture by Afaque Umer (Medium)
- [Article] Introduction to Convolutional Neural Networks (CNN) by Manav Mandal (Analytics Vidhya)
- [Article] What Are Convolutional Neural Networks? by IBM Team (IBM)
- [Article] What is CNN? A 5 Year Old guide to Convolutional Neural Network by William Ong (Medium)
- [Article] Convolutional Neural Network by Thomas Wood (DeepAI.Org)
- [Article] How Do Convolutional Layers Work in Deep Learning Neural Networks? by Jason Brownlee (Machine Learning Mastery)
- [Article] Convolutional Neural Networks Explained: Using PyTorch to Understand CNNs by Vihar Kurama (BuiltIn)
- [Article] Convolutional Neural Networks Cheatsheet by Afshine Amidi and Shervine Amidi (Stanford University)
- [Article] An Intuitive Explanation of Convolutional Neural Networks by Ujjwal Karn (The Data Science Blog)
- [Article] Convolutional Neural Network by NVIDIA Team (NVIDIA)
- [Article] Convolutional Neural Networks (CNN) Overview by Nikolaj Buhl (Encord)
- [Article] Understanding Convolutional Neural Network (CNN): A Complete Guide by LearnOpenCV Team (LearnOpenCV)
- [Article] Convolutional Neural Networks (CNNs) and Layer Types by Adrian Rosebrock (PyImageSearch)
- [Article] How Convolutional Neural Networks See The World by Francois Chollet (The Keras Blog)
- [Article] What Is a Convolutional Neural Network? by MathWorks Team (MathWorks)
- [Article] Grad-CAM Class Activation Visualization by Francois Chollet (Keras.IO)
- [Article] Grad-CAM: Visualize Class Activation Maps with Keras, TensorFlow, and Deep Learning by Adrian Rosebrock (PyImageSearch)
- [Kaggle Project] COVID-19 Radiography Data - EDA and CNN Model by Juliana Negrini De Araujo (Kaggle)
- [Kaggle Project] Pneumonia Detection using CNN (92.6% Accuracy) by Madhav Mathur (Kaggle)
- [Kaggle Project] COVID Detection from CXR Using Explainable CNN by Manu Siddhartha (Kaggle)
- [Kaggle Project] Class Activation Mapping for COVID-19 CNN by Amy Zhang (Kaggle)
- [Kaggle Project] CNN mri glioma Classification by Gabriel Mino (Kaggle)
- [Kaggle Project] Detecting-COVID-19-Images | CNN by Felipe Oliveira (Kaggle)
- [Kaggle Project] Detection of COVID Positive Cases using DL by Sana Shaikh (Kaggle)
- [Kaggle Project] Deep Learning and Transfer Learning on COVID-19 by Digvijay Yadav (Kaggle)
- [Kaggle Project] X-ray Detecting Using CNN by Shivan Kumar (Kaggle)
- [Kaggle Project] Classification of COVID-19 using CNN by Islam Selim (Kaggle)
- [Kaggle Project] COVID-19 - Revisiting Pneumonia Detection by Paulo Breviglieri (Kaggle)
- [Kaggle Project] Multi-Class X-ray COVID-19 Classification - 94% Accuracy by Quadeer Shaikh (Kaggle)
- [Kaggle Project] Grad-CAM: What Do CNNs See? by Derrel Souza (Kaggle)
- [GitHub Project] Grad-CAM by Ismail Uddin (GitHub)
- [Publication] Gradient-Based Learning Applied to Document Recognition by Yann LeCun, Leon Bottou, Yoshua Bengio and Patrick Haffner (Proceedings of the IEEE)
- [Publication] Learning Deep Features for Discriminative Localization by Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva and Antonio Torralba (Computer Vision and Pattern Recognition)
- [Publication] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization by Ramprasaath Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh and Dhruv Batra (Computer Vision and Pattern Recognition)
- [Course] IBM Data Analyst Professional Certificate by IBM Team (Coursera)
- [Course] IBM Data Science Professional Certificate by IBM Team (Coursera)
- [Course] IBM Machine Learning Professional Certificate by IBM Team (Coursera)
In [171]:
from IPython.display import display, HTML
display(HTML("<style>.rendered_html { font-size: 15px; font-family: 'Trebuchet MS'; }</style>"))